mbox series

[00/37] Shared Virtual Addressing for the IOMMU

Message ID 20180212183352.22730-1-jean-philippe.brucker@arm.com
Headers show
Series Shared Virtual Addressing for the IOMMU | expand

Message

Jean-Philippe Brucker Feb. 12, 2018, 6:33 p.m. UTC
Shared Virtual Addressing (SVA) is the ability to share process address
spaces with devices. It is called "SVM" (Shared Virtual Memory) by
OpenCL and some IOMMU architectures, but since that abbreviation is
already used for AMD virtualisation in Linux (Secure Virtual Machine),
we prefer the less ambiguous "SVA".

Sharing process address spaces with devices allows to rely on core kernel
memory management for DMA, removing some complexity from application and
device drivers. After binding to a device, applications can instruct it to
perform DMA on buffers obtained with malloc.

The device, buses and the IOMMU must support the following features:

* Multiple address spaces per device, for example using the PCI PASID
  (Process Address Space ID) extension. The IOMMU driver allocates a
  PASID and the device uses it in DMA transactions.

* I/O Page Faults (IOPF), for example PCI PRI (Page Request Interface) or
  Arm SMMU stall. The core mm handles translation faults from the IOMMU.

* MMU and IOMMU implement compatible page table formats.

This series requires to support all three features. I tried to
facilitate using only a subset of them but enabling it requires more
work. Upcoming patches will enable private PASID management, which
allows device driver to use an API similar to classical DMA,
map()/unmap() on PASIDs. In the future device drivers should also be
able to use SVA without IOPF by pinning all pages, or without PASID by
sharing the single device address space with a process.

Although we don't have any performance measurement at the moment, SVA
will likely be slower than classical DMA since it relies on page faults,
whereas classical DMA pins all pages in memory. SVA mostly aims at
simplifying DMA management, but also improves security by isolating
address spaces in devices.

Intel and AMD IOMMU drivers already offer slightly differing public
functions that bind process address spaces to devices. Because they don't
go through an architecture-agnostic API, only integrated devices could
use them so far.
                                ---

The series adds an SVA API to the IOMMU core, an example implementation
(SMMUv3), and an example user (VFIO). Since last version, sent as RFCv2
in October [1], I reworked the API and fixed some bugs.

Patches 1-6 introduce the bind API and track address spaces. This
version of the patchset improves documentation, adds device_init()/
shutdown(), and per-bond device driver data. Functions available to
device drivers are:

	iommu_sva_device_init(dev, features, max_pasid)
	iommu_sva_device_shutdown(dev)
	iommu_register_mm_exit_handler(dev, handler)
	iommu_unregister_mm_exit_handler(dev)
	iommu_sva_bind_device(dev, mm, *pasid, flags, drvdata)
	iommu_sva_unbind_device(dev, pasid)

Patches 7-10 add a generic fault handler. This version reuses the
structures introduced by Jacob Pan's vSVA series [2] (with some changes
to match the most recent comments in that thread).

Patches 11-36 add complete SVA support to the SMMUv3 driver, for both
platform and PCI devices. If you don't care about SMMU I advise to only
look at patches 25, 27, 29 and 35, which use the tools introduced
earlier.

In this version, context code for SMMUv3 moved to a separate module,
behind an interface reusable by other IOMMU drivers, and easily
extensible for private PASIDs. There are complicated interactions
between private and shared contexts (they have a common ASID space), so
moving it all to a separate file also helps making sense of refs and
locks.

Finally, patch 37 adds an ioctl to VFIO providing SVA to userspace
drivers. Since last version I fixed a few bugs.

You can pull the full series based onto v4.16-rc1+fault patches at:
git://linux-arm.org/linux-jpb.git sva/v1

I tested this code on a software model implementing an SMMUv3 and a
dummy DMA devices. Any testing report would be greatly appreciated!

[1] [RFCv2 PATCH 00/36] Process management for IOMMU + SVM for SMMUv3
    https://www.spinics.net/lists/arm-kernel/msg609771.html
[2] [PATCH v3 00/16] IOMMU driver support for SVM virtualization
    https://www.spinics.net/lists/kernel/msg2651481.html

Jean-Philippe Brucker (37):
  iommu: Introduce Shared Virtual Addressing API
  iommu/sva: Bind process address spaces to devices
  iommu/sva: Manage process address spaces
  iommu/sva: Add a mm_exit callback for device drivers
  iommu/sva: Track mm changes with an MMU notifier
  iommu/sva: Search mm by PASID
  iommu: Add a page fault handler
  iommu/fault: Handle mm faults
  iommu/fault: Let handler return a fault response
  iommu/fault: Allow blocking fault handlers
  dt-bindings: document stall and PASID properties for IOMMU masters
  iommu/of: Add stall and pasid properties to iommu_fwspec
  arm64: mm: Pin down ASIDs for sharing mm with devices
  iommu/arm-smmu-v3: Link domains and devices
  iommu/io-pgtable-arm: Factor out ARM LPAE register defines
  iommu: Add generic PASID table library
  iommu/arm-smmu-v3: Move context descriptor code
  iommu/arm-smmu-v3: Add support for Substream IDs
  iommu/arm-smmu-v3: Add second level of context descriptor table
  iommu/arm-smmu-v3: Share process page tables
  iommu/arm-smmu-v3: Seize private ASID
  iommu/arm-smmu-v3: Add support for VHE
  iommu/arm-smmu-v3: Enable broadcast TLB maintenance
  iommu/arm-smmu-v3: Add SVA feature checking
  iommu/arm-smmu-v3: Implement mm operations
  iommu/arm-smmu-v3: Add support for Hardware Translation Table Update
  iommu/arm-smmu-v3: Register fault workqueue
  iommu/arm-smmu-v3: Maintain a SID->device structure
  iommu/arm-smmu-v3: Add stall support for platform devices
  ACPI/IORT: Check ATS capability in root complex nodes
  iommu/arm-smmu-v3: Add support for PCI ATS
  iommu/arm-smmu-v3: Hook up ATC invalidation to mm ops
  iommu/arm-smmu-v3: Disable tagged pointers
  PCI: Make "PRG Response PASID Required" handling common
  iommu/arm-smmu-v3: Add support for PRI
  iommu/arm-smmu-v3: Add support for PCI PASID
  vfio: Add support for Shared Virtual Addressing

 Documentation/devicetree/bindings/iommu/iommu.txt |   24 +
 MAINTAINERS                                       |    3 +-
 arch/arm64/include/asm/mmu.h                      |    1 +
 arch/arm64/include/asm/mmu_context.h              |   11 +-
 arch/arm64/mm/context.c                           |   87 +-
 drivers/acpi/arm64/iort.c                         |   11 +
 drivers/iommu/Kconfig                             |   42 +
 drivers/iommu/Makefile                            |    4 +
 drivers/iommu/amd_iommu.c                         |   19 +-
 drivers/iommu/arm-smmu-v3-context.c               |  728 +++++++++++
 drivers/iommu/arm-smmu-v3.c                       | 1395 ++++++++++++++++++---
 drivers/iommu/io-pgfault.c                        |  384 ++++++
 drivers/iommu/io-pgtable-arm.c                    |   48 +-
 drivers/iommu/io-pgtable-arm.h                    |   67 +
 drivers/iommu/iommu-pasid.c                       |   54 +
 drivers/iommu/iommu-pasid.h                       |  173 +++
 drivers/iommu/iommu-sva.c                         |  795 ++++++++++++
 drivers/iommu/iommu.c                             |  109 +-
 drivers/iommu/of_iommu.c                          |   12 +
 drivers/pci/ats.c                                 |   17 +
 drivers/vfio/vfio_iommu_type1.c                   |  399 ++++++
 include/linux/iommu.h                             |  217 +++-
 include/linux/pci-ats.h                           |    8 +
 include/uapi/linux/pci_regs.h                     |    1 +
 include/uapi/linux/vfio.h                         |   76 ++
 25 files changed, 4381 insertions(+), 304 deletions(-)
 create mode 100644 drivers/iommu/arm-smmu-v3-context.c
 create mode 100644 drivers/iommu/io-pgfault.c
 create mode 100644 drivers/iommu/io-pgtable-arm.h
 create mode 100644 drivers/iommu/iommu-pasid.c
 create mode 100644 drivers/iommu/iommu-pasid.h
 create mode 100644 drivers/iommu/iommu-sva.c

Comments

Xu Zaibo Feb. 13, 2018, 1:46 a.m. UTC | #1
Hi,

On 2018/2/13 2:33, Jean-Philippe Brucker wrote:
> The SMMU provides a Stall model for handling page faults in platform
> devices. It is similar to PCI PRI, but doesn't require devices to have
> their own translation cache. Instead, faulting transactions are parked and
> the OS is given a chance to fix the page tables and retry the transaction.
>
> Enable stall for devices that support it (opt-in by firmware). When an
> event corresponds to a translation error, call the IOMMU fault handler. If
> the fault is recoverable, it will call us back to terminate or continue
> the stall.
>
> Note that this patch tweaks the iommu_fault_event and page_response_msg to
> extend the fault id field. Stall uses 16 bits of IDs whereas PCI PRI only
> uses 9.
For PCIe devices without ATC,  can they use this Stall model?

Thanks.

Xu Zaibo
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> ---
>   drivers/iommu/arm-smmu-v3.c | 175 +++++++++++++++++++++++++++++++++++++++++++-
>   include/linux/iommu.h       |   4 +-
>   2 files changed, 173 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
> index 2430b2140f8d..8b9f5dd06be0 100644
> --- a/drivers/iommu/arm-smmu-v3.c
> +++ b/drivers/iommu/arm-smmu-v3.c
> @@ -338,6 +338,15 @@
>   #define CMDQ_PRI_1_RESP_FAIL		(1UL << CMDQ_PRI_1_RESP_SHIFT)
>   #define CMDQ_PRI_1_RESP_SUCC		(2UL << CMDQ_PRI_1_RESP_SHIFT)
>   
> +#define CMDQ_RESUME_0_SID_SHIFT		32
> +#define CMDQ_RESUME_0_SID_MASK		0xffffffffUL
> +#define CMDQ_RESUME_0_ACTION_SHIFT	12
> +#define CMDQ_RESUME_0_ACTION_TERM	(0UL << CMDQ_RESUME_0_ACTION_SHIFT)
> +#define CMDQ_RESUME_0_ACTION_RETRY	(1UL << CMDQ_RESUME_0_ACTION_SHIFT)
> +#define CMDQ_RESUME_0_ACTION_ABORT	(2UL << CMDQ_RESUME_0_ACTION_SHIFT)
> +#define CMDQ_RESUME_1_STAG_SHIFT	0
> +#define CMDQ_RESUME_1_STAG_MASK		0xffffUL
> +
>   #define CMDQ_SYNC_0_CS_SHIFT		12
>   #define CMDQ_SYNC_0_CS_NONE		(0UL << CMDQ_SYNC_0_CS_SHIFT)
>   #define CMDQ_SYNC_0_CS_IRQ		(1UL << CMDQ_SYNC_0_CS_SHIFT)
> @@ -358,6 +367,31 @@
>   #define EVTQ_0_ID_SHIFT			0
>   #define EVTQ_0_ID_MASK			0xffUL
>   
> +#define EVT_ID_TRANSLATION_FAULT	0x10
> +#define EVT_ID_ADDR_SIZE_FAULT		0x11
> +#define EVT_ID_ACCESS_FAULT		0x12
> +#define EVT_ID_PERMISSION_FAULT		0x13
> +
> +#define EVTQ_0_SSV			(1UL << 11)
> +#define EVTQ_0_SSID_SHIFT		12
> +#define EVTQ_0_SSID_MASK		0xfffffUL
> +#define EVTQ_0_SID_SHIFT		32
> +#define EVTQ_0_SID_MASK			0xffffffffUL
> +#define EVTQ_1_STAG_SHIFT		0
> +#define EVTQ_1_STAG_MASK		0xffffUL
> +#define EVTQ_1_STALL			(1UL << 31)
> +#define EVTQ_1_PRIV			(1UL << 33)
> +#define EVTQ_1_EXEC			(1UL << 34)
> +#define EVTQ_1_READ			(1UL << 35)
> +#define EVTQ_1_S2			(1UL << 39)
> +#define EVTQ_1_CLASS_SHIFT		40
> +#define EVTQ_1_CLASS_MASK		0x3UL
> +#define EVTQ_1_TT_READ			(1UL << 44)
> +#define EVTQ_2_ADDR_SHIFT		0
> +#define EVTQ_2_ADDR_MASK		0xffffffffffffffffUL
> +#define EVTQ_3_IPA_SHIFT		12
> +#define EVTQ_3_IPA_MASK			0xffffffffffUL
> +
>   /* PRI queue */
>   #define PRIQ_ENT_DWORDS			2
>   #define PRIQ_MAX_SZ_SHIFT		8
> @@ -472,6 +506,13 @@ struct arm_smmu_cmdq_ent {
>   			enum pri_resp		resp;
>   		} pri;
>   
> +		#define CMDQ_OP_RESUME		0x44
> +		struct {
> +			u32			sid;
> +			u16			stag;
> +			enum page_response_code	resp;
> +		} resume;
> +
>   		#define CMDQ_OP_CMD_SYNC	0x46
>   		struct {
>   			u32			msidata;
> @@ -545,6 +586,8 @@ struct arm_smmu_strtab_ent {
>   	bool				assigned;
>   	struct arm_smmu_s1_cfg		*s1_cfg;
>   	struct arm_smmu_s2_cfg		*s2_cfg;
> +
> +	bool				can_stall;
>   };
>   
>   struct arm_smmu_strtab_cfg {
> @@ -904,6 +947,21 @@ static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
>   			return -EINVAL;
>   		}
>   		break;
> +	case CMDQ_OP_RESUME:
> +		cmd[0] |= (u64)ent->resume.sid << CMDQ_RESUME_0_SID_SHIFT;
> +		cmd[1] |= ent->resume.stag << CMDQ_RESUME_1_STAG_SHIFT;
> +		switch (ent->resume.resp) {
> +		case IOMMU_PAGE_RESP_INVALID:
> +		case IOMMU_PAGE_RESP_FAILURE:
> +			cmd[0] |= CMDQ_RESUME_0_ACTION_ABORT;
> +			break;
> +		case IOMMU_PAGE_RESP_SUCCESS:
> +			cmd[0] |= CMDQ_RESUME_0_ACTION_RETRY;
> +			break;
> +		default:
> +			return -EINVAL;
> +		}
> +		break;
>   	case CMDQ_OP_CMD_SYNC:
>   		if (ent->sync.msiaddr)
>   			cmd[0] |= CMDQ_SYNC_0_CS_IRQ;
> @@ -1065,6 +1123,35 @@ static void arm_smmu_cmdq_issue_sync(struct arm_smmu_device *smmu)
>   		dev_err_ratelimited(smmu->dev, "CMD_SYNC timeout\n");
>   }
>   
> +static int arm_smmu_page_response(struct iommu_domain *domain,
> +				  struct device *dev,
> +				  struct page_response_msg *resp)
> +{
> +	int sid = dev->iommu_fwspec->ids[0];
> +	struct arm_smmu_cmdq_ent cmd = {0};
> +	struct arm_smmu_master_data *master = dev->iommu_fwspec->iommu_priv;
> +
> +	if (master->ste.can_stall) {
> +		cmd.opcode		= CMDQ_OP_RESUME;
> +		cmd.resume.sid		= sid;
> +		cmd.resume.stag		= resp->page_req_group_id;
> +		cmd.resume.resp		= resp->resp_code;
> +	} else {
> +		/* TODO: put PRI response here */
> +		return -EINVAL;
> +	}
> +
> +	arm_smmu_cmdq_issue_cmd(master->smmu, &cmd);
> +	/*
> +	 * Don't send a SYNC, it doesn't do anything for RESUME or PRI_RESP.
> +	 * RESUME consumption guarantees that the stalled transaction will be
> +	 * terminated... at some point in the future. PRI_RESP is fire and
> +	 * forget.
> +	 */
> +
> +	return 0;
> +}
> +
>   /* Stream table manipulation functions */
>   static void
>   arm_smmu_write_strtab_l1_desc(__le64 *dst, struct arm_smmu_strtab_l1_desc *desc)
> @@ -1182,7 +1269,8 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_device *smmu, u32 sid,
>   			 STRTAB_STE_1_STRW_SHIFT);
>   
>   		if (smmu->features & ARM_SMMU_FEAT_STALLS &&
> -		   !(smmu->features & ARM_SMMU_FEAT_STALL_FORCE))
> +		   !(smmu->features & ARM_SMMU_FEAT_STALL_FORCE) &&
> +		   !ste->can_stall)
>   			dst[1] |= cpu_to_le64(STRTAB_STE_1_S1STALLD);
>   
>   		val |= (cfg->base & STRTAB_STE_0_S1CTXPTR_MASK
> @@ -1285,10 +1373,73 @@ arm_smmu_find_master(struct arm_smmu_device *smmu, u32 sid)
>   	return master;
>   }
>   
> +static int arm_smmu_handle_evt(struct arm_smmu_device *smmu, u64 *evt)
> +{
> +	struct arm_smmu_master_data *master;
> +	u8 type = evt[0] >> EVTQ_0_ID_SHIFT & EVTQ_0_ID_MASK;
> +	u32 sid = evt[0] >> EVTQ_0_SID_SHIFT & EVTQ_0_SID_MASK;
> +
> +	struct iommu_fault_event fault = {
> +		.page_req_group_id = evt[1] >> EVTQ_1_STAG_SHIFT & EVTQ_1_STAG_MASK,
> +		.addr		= evt[2] >> EVTQ_2_ADDR_SHIFT & EVTQ_2_ADDR_MASK,
> +		.last_req	= true,
> +	};
> +
> +	switch (type) {
> +	case EVT_ID_TRANSLATION_FAULT:
> +	case EVT_ID_ADDR_SIZE_FAULT:
> +	case EVT_ID_ACCESS_FAULT:
> +		fault.reason = IOMMU_FAULT_REASON_PTE_FETCH;
> +		break;
> +	case EVT_ID_PERMISSION_FAULT:
> +		fault.reason = IOMMU_FAULT_REASON_PERMISSION;
> +		break;
> +	default:
> +		/* TODO: report other unrecoverable faults. */
> +		return -EFAULT;
> +	}
> +
> +	/* Stage-2 is always pinned at the moment */
> +	if (evt[1] & EVTQ_1_S2)
> +		return -EFAULT;
> +
> +	master = arm_smmu_find_master(smmu, sid);
> +	if (!master)
> +		return -EINVAL;
> +
> +	/*
> +	 * The domain is valid until the fault returns, because detach() flushes
> +	 * the fault queue.
> +	 */
> +	if (evt[1] & EVTQ_1_STALL)
> +		fault.type = IOMMU_FAULT_PAGE_REQ;
> +	else
> +		fault.type = IOMMU_FAULT_DMA_UNRECOV;
> +
> +	if (evt[1] & EVTQ_1_READ)
> +		fault.prot |= IOMMU_FAULT_READ;
> +	else
> +		fault.prot |= IOMMU_FAULT_WRITE;
> +
> +	if (evt[1] & EVTQ_1_EXEC)
> +		fault.prot |= IOMMU_FAULT_EXEC;
> +
> +	if (evt[1] & EVTQ_1_PRIV)
> +		fault.prot |= IOMMU_FAULT_PRIV;
> +
> +	if (evt[0] & EVTQ_0_SSV) {
> +		fault.pasid_valid = true;
> +		fault.pasid = evt[0] >> EVTQ_0_SSID_SHIFT & EVTQ_0_SSID_MASK;
> +	}
> +
> +	/* Report to device driver or populate the page tables */
> +	return iommu_report_device_fault(master->dev, &fault);
> +}
> +
>   /* IRQ and event handlers */
>   static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev)
>   {
> -	int i;
> +	int i, ret;
>   	int num_handled = 0;
>   	struct arm_smmu_device *smmu = dev;
>   	struct arm_smmu_queue *q = &smmu->evtq.q;
> @@ -1300,12 +1451,19 @@ static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev)
>   		while (!queue_remove_raw(q, evt)) {
>   			u8 id = evt[0] >> EVTQ_0_ID_SHIFT & EVTQ_0_ID_MASK;
>   
> +			spin_unlock(&q->wq.lock);
> +			ret = arm_smmu_handle_evt(smmu, evt);
> +			spin_lock(&q->wq.lock);
> +
>   			if (++num_handled == queue_size) {
>   				q->batch++;
>   				wake_up_locked(&q->wq);
>   				num_handled = 0;
>   			}
>   
> +			if (!ret)
> +				continue;
> +
>   			dev_info(smmu->dev, "event 0x%02x received:\n", id);
>   			for (i = 0; i < ARRAY_SIZE(evt); ++i)
>   				dev_info(smmu->dev, "\t0x%016llx\n",
> @@ -1442,7 +1600,9 @@ static int arm_smmu_flush_queues(struct notifier_block *nb,
>   		master = dev->iommu_fwspec->iommu_priv;
>   
>   	if (master) {
> -		/* TODO: add support for PRI and Stall */
> +		if (master->ste.can_stall)
> +			arm_smmu_flush_queue(smmu, &smmu->evtq.q, "evtq");
> +		/* TODO: add support for PRI */
>   		return 0;
>   	}
>   
> @@ -1756,7 +1916,8 @@ static int arm_smmu_domain_finalise_s1(struct arm_smmu_domain *smmu_domain,
>   		.order			= master->ssid_bits,
>   		.sync			= &arm_smmu_ctx_sync,
>   		.arm_smmu = {
> -			.stall		= !!(smmu->features & ARM_SMMU_FEAT_STALL_FORCE),
> +			.stall		= !!(smmu->features & ARM_SMMU_FEAT_STALL_FORCE) ||
> +					  master->ste.can_stall,
>   			.asid_bits	= smmu->asid_bits,
>   			.hw_access	= !!(smmu->features & ARM_SMMU_FEAT_HA),
>   			.hw_dirty	= !!(smmu->features & ARM_SMMU_FEAT_HD),
> @@ -2296,6 +2457,11 @@ static int arm_smmu_add_device(struct device *dev)
>   
>   	master->ssid_bits = min(smmu->ssid_bits, fwspec->num_pasid_bits);
>   
> +	if (fwspec->can_stall && smmu->features & ARM_SMMU_FEAT_STALLS) {
> +		master->can_fault = true;
> +		master->ste.can_stall = true;
> +	}
> +
>   	group = iommu_group_get_for_dev(dev);
>   	if (!IS_ERR(group)) {
>   		arm_smmu_insert_master(smmu, master);
> @@ -2435,6 +2601,7 @@ static struct iommu_ops arm_smmu_ops = {
>   	.mm_attach		= arm_smmu_mm_attach,
>   	.mm_detach		= arm_smmu_mm_detach,
>   	.mm_invalidate		= arm_smmu_mm_invalidate,
> +	.page_response		= arm_smmu_page_response,
>   	.map			= arm_smmu_map,
>   	.unmap			= arm_smmu_unmap,
>   	.map_sg			= default_iommu_map_sg,
> diff --git a/include/linux/iommu.h b/include/linux/iommu.h
> index 37c3b9d087ce..f5c2f4be2b42 100644
> --- a/include/linux/iommu.h
> +++ b/include/linux/iommu.h
> @@ -227,7 +227,7 @@ struct page_response_msg {
>   	u32 pasid;
>   	enum page_response_code resp_code;
>   	u32 pasid_present:1;
> -	u32 page_req_group_id : 9;
> +	u32 page_req_group_id;
>   	enum page_response_type type;
>   	u32 private_data;
>   };
> @@ -421,7 +421,7 @@ struct iommu_fault_event {
>   	enum iommu_fault_reason reason;
>   	u64 addr;
>   	u32 pasid;
> -	u32 page_req_group_id : 9;
> +	u32 page_req_group_id;
>   	u32 last_req : 1;
>   	u32 pasid_valid : 1;
>   	u32 prot;


--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Tian, Kevin Feb. 13, 2018, 7:31 a.m. UTC | #2
> From: Jean-Philippe Brucker
> Sent: Tuesday, February 13, 2018 2:33 AM
> 
> Shared Virtual Addressing (SVA) provides a way for device drivers to bind
> process address spaces to devices. This requires the IOMMU to support the
> same page table format as CPUs, and requires the system to support I/O

"same" is a bit restrictive. "compatible" is better as you used in coverletter. :-)

> Page Faults (IOPF) and Process Address Space ID (PASID). When all of these
> are available, DMA can access virtual addresses of a process. A PASID is
> allocated for each process, and the device driver programs it into the
> device in an implementation-specific way.
> 
> Add a new API for sharing process page tables with devices. Introduce two
> IOMMU operations, sva_device_init() and sva_device_shutdown(), that
> prepare the IOMMU driver for SVA. For example allocate PASID tables and
> fault queues. Subsequent patches will implement the bind() and unbind()
> operations.
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> ---
>  drivers/iommu/Kconfig     | 10 ++++++
>  drivers/iommu/Makefile    |  1 +
>  drivers/iommu/iommu-sva.c | 90
> +++++++++++++++++++++++++++++++++++++++++++++++
>  include/linux/iommu.h     | 32 +++++++++++++++++
>  4 files changed, 133 insertions(+)
>  create mode 100644 drivers/iommu/iommu-sva.c
> 
> diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
> index f3a21343e636..555147a61f7c 100644
> --- a/drivers/iommu/Kconfig
> +++ b/drivers/iommu/Kconfig
> @@ -74,6 +74,16 @@ config IOMMU_DMA
>  	select IOMMU_IOVA
>  	select NEED_SG_DMA_LENGTH
> 
> +config IOMMU_SVA
> +	bool "Shared Virtual Addressing API for the IOMMU"
> +	select IOMMU_API
> +	help
> +	  Enable process address space management for the IOMMU API. In
> systems
> +	  that support it, device drivers can bind process address spaces to
> +	  devices and share their page tables using this API.

"their page table" is a bit confusing here.

> +
> +	  If unsure, say N here.
> +
>  config FSL_PAMU
>  	bool "Freescale IOMMU support"
>  	depends on PCI
> diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
> index 1fb695854809..1dbcc89ebe4c 100644
> --- a/drivers/iommu/Makefile
> +++ b/drivers/iommu/Makefile
> @@ -3,6 +3,7 @@ obj-$(CONFIG_IOMMU_API) += iommu.o
>  obj-$(CONFIG_IOMMU_API) += iommu-traces.o
>  obj-$(CONFIG_IOMMU_API) += iommu-sysfs.o
>  obj-$(CONFIG_IOMMU_DMA) += dma-iommu.o
> +obj-$(CONFIG_IOMMU_SVA) += iommu-sva.o
>  obj-$(CONFIG_IOMMU_IO_PGTABLE) += io-pgtable.o
>  obj-$(CONFIG_IOMMU_IO_PGTABLE_ARMV7S) += io-pgtable-arm-v7s.o
>  obj-$(CONFIG_IOMMU_IO_PGTABLE_LPAE) += io-pgtable-arm.o
> diff --git a/drivers/iommu/iommu-sva.c b/drivers/iommu/iommu-sva.c
> new file mode 100644
> index 000000000000..cab5d723520f
> --- /dev/null
> +++ b/drivers/iommu/iommu-sva.c
> @@ -0,0 +1,90 @@
> +/*
> + * Track processes address spaces bound to devices and allocate PASIDs.
> + *
> + * Copyright (C) 2018 ARM Ltd.
> + * Author: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> + *
> + * SPDX-License-Identifier: GPL-2.0
> + */
> +
> +#include <linux/iommu.h>
> +
> +/**
> + * iommu_sva_device_init() - Initialize Shared Virtual Addressing for a
> device
> + * @dev: the device
> + * @features: bitmask of features that need to be initialized
> + * @max_pasid: max PASID value supported by the device
> + *
> + * Users of the bind()/unbind() API must call this function to initialize all
> + * features required for SVA.
> + *
> + * - If the device should support multiple address spaces (e.g. PCI PASID),
> + *   IOMMU_SVA_FEAT_PASID must be requested.

I think it is by default assumed when using this API, based on definition of
SVA. Can you elaborate the situation where this flag can be cleared?

> + *
> + *   By default the PASID allocated during bind() is limited by the IOMMU
> + *   capacity, and by the device PASID width defined in the PCI capability or
> in
> + *   the firmware description. Setting @max_pasid to a non-zero value
> smaller
> + *   than this limit overrides it.
> + *
> + * - If the device should support I/O Page Faults (e.g. PCI PRI),
> + *   IOMMU_SVA_FEAT_IOPF must be requested.
> + *
> + * The device should not be be performing any DMA while this function is

remove double "be"

> + * running.

"otherwise the behavior is undefined"

> + *
> + * Return 0 if initialization succeeded, or an error.
> + */
> +int iommu_sva_device_init(struct device *dev, unsigned long features,
> +			  unsigned int max_pasid)
> +{
> +	int ret;
> +	unsigned int min_pasid = 0;
> +	struct iommu_param *dev_param = dev->iommu_param;
> +	struct iommu_domain *domain =
> iommu_get_domain_for_dev(dev);
> +
> +	if (!domain || !dev_param || !domain->ops->sva_device_init)
> +		return -ENODEV;
> +
> +	/*
> +	 * IOMMU driver updates the limits depending on the IOMMU and
> device
> +	 * capabilities.
> +	 */
> +	ret = domain->ops->sva_device_init(dev, features, &min_pasid,
> +					   &max_pasid);
> +	if (ret)
> +		return ret;
> +
> +	/* FIXME: racy. Next version should have a mutex (same as fault
> handler) */
> +	dev_param->sva_features = features;
> +	dev_param->min_pasid = min_pasid;
> +	dev_param->max_pasid = max_pasid;

what's the point of min_pasid here?

> +
> +	return 0;
> +}
> +EXPORT_SYMBOL_GPL(iommu_sva_device_init);
> +
> +/**
> + * iommu_sva_device_shutdown() - Shutdown Shared Virtual Addressing
> for a device
> + * @dev: the device
> + *
> + * Disable SVA. The device should not be performing any DMA while this
> function
> + * is running.
> + */
> +int iommu_sva_device_shutdown(struct device *dev)
> +{
> +	struct iommu_param *dev_param = dev->iommu_param;
> +	struct iommu_domain *domain =
> iommu_get_domain_for_dev(dev);
> +
> +	if (!domain)
> +		return -ENODEV;
> +
> +	if (domain->ops->sva_device_shutdown)
> +		domain->ops->sva_device_shutdown(dev);
> +
> +	dev_param->sva_features = 0;
> +	dev_param->min_pasid = 0;
> +	dev_param->max_pasid = 0;
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL_GPL(iommu_sva_device_shutdown);
> diff --git a/include/linux/iommu.h b/include/linux/iommu.h
> index 66ef406396e9..e9e09eecdece 100644
> --- a/include/linux/iommu.h
> +++ b/include/linux/iommu.h
> @@ -60,6 +60,11 @@ typedef int (*iommu_fault_handler_t)(struct
> iommu_domain *,
>  			struct device *, unsigned long, int, void *);
>  typedef int (*iommu_dev_fault_handler_t)(struct iommu_fault_event *,
> void *);
> 
> +/* Request PASID support */
> +#define IOMMU_SVA_FEAT_PASID		(1 << 0)
> +/* Request I/O page fault support */
> +#define IOMMU_SVA_FEAT_IOPF		(1 << 1)
> +
>  struct iommu_domain_geometry {
>  	dma_addr_t aperture_start; /* First address that can be mapped
> */
>  	dma_addr_t aperture_end;   /* Last address that can be mapped
> */
> @@ -197,6 +202,8 @@ struct page_response_msg {
>   * @domain_free: free iommu domain
>   * @attach_dev: attach device to an iommu domain
>   * @detach_dev: detach device from an iommu domain
> + * @sva_device_init: initialize Shared Virtual Adressing for a device
> + * @sva_device_shutdown: shutdown Shared Virtual Adressing for a
> device
>   * @map: map a physically contiguous memory region to an iommu
> domain
>   * @unmap: unmap a physically contiguous memory region from an
> iommu domain
>   * @map_sg: map a scatter-gather list of physically contiguous memory
> chunks
> @@ -230,6 +237,10 @@ struct iommu_ops {
> 
>  	int (*attach_dev)(struct iommu_domain *domain, struct device
> *dev);
>  	void (*detach_dev)(struct iommu_domain *domain, struct device
> *dev);
> +	int (*sva_device_init)(struct device *dev, unsigned long features,
> +			       unsigned int *min_pasid,
> +			       unsigned int *max_pasid);
> +	void (*sva_device_shutdown)(struct device *dev);
>  	int (*map)(struct iommu_domain *domain, unsigned long iova,
>  		   phys_addr_t paddr, size_t size, int prot);
>  	size_t (*unmap)(struct iommu_domain *domain, unsigned long
> iova,
> @@ -385,6 +396,9 @@ struct iommu_fault_param {
>   */
>  struct iommu_param {
>  	struct iommu_fault_param *fault_param;
> +	unsigned long sva_features;
> +	unsigned int min_pasid;
> +	unsigned int max_pasid;
>  };
> 
>  int  iommu_device_register(struct iommu_device *iommu);
> @@ -878,4 +892,22 @@ const struct iommu_ops
> *iommu_ops_from_fwnode(struct fwnode_handle *fwnode)
> 
>  #endif /* CONFIG_IOMMU_API */
> 
> +#ifdef CONFIG_IOMMU_SVA
> +extern int iommu_sva_device_init(struct device *dev, unsigned long
> features,
> +				 unsigned int max_pasid);
> +extern int iommu_sva_device_shutdown(struct device *dev);
> +#else /* CONFIG_IOMMU_SVA */
> +static inline int iommu_sva_device_init(struct device *dev,
> +					unsigned long features,
> +					unsigned int max_pasid)
> +{
> +	return -ENODEV;
> +}
> +
> +static inline int iommu_sva_device_shutdown(struct device *dev)
> +{
> +	return -ENODEV;
> +}
> +#endif /* CONFIG_IOMMU_SVA */
> +
>  #endif /* __LINUX_IOMMU_H */
> --
> 2.15.1
> 
> _______________________________________________
> iommu mailing list
> iommu@lists.linux-foundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/iommu
--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Tian, Kevin Feb. 13, 2018, 7:54 a.m. UTC | #3
> From: Jean-Philippe Brucker
> Sent: Tuesday, February 13, 2018 2:33 AM
> 
> Add bind() and unbind() operations to the IOMMU API. Device drivers can
> use them to share process page tables with their devices. bind_group()
> is provided for VFIO's convenience, as it needs to provide a coherent
> interface on containers. Other device drivers will most likely want to
> use bind_device(), which binds a single device in the group.

I saw your bind_group implementation tries to bind the address space
for all devices within a group, which IMO has some problem. Based on PCIe
spec, packet routing on the bus doesn't take PASID into consideration. 
since devices within same group cannot be isolated based on requestor-ID
i.e. traffic not guaranteed going to IOMMU, enabling SVA on multiple devices
could cause undesired p2p.
 
If my understanding of PCIe spec is correct, probably we should fail 
calling bind_group()/bind_device() when there are multiple devices within 
the given group. If only one device then bind_group is essentially a wrapper
to bind_device.

> 
> Regardless of the IOMMU group or domain a device is in, device drivers
> should call bind() for each device that will use the PASID.
> 
> This patch only adds skeletons for the device driver API, most of the
> implementation is still missing.
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> ---
>  drivers/iommu/iommu-sva.c | 105
> ++++++++++++++++++++++++++++++++++++++++++++++
>  drivers/iommu/iommu.c     |  63 ++++++++++++++++++++++++++++
>  include/linux/iommu.h     |  36 ++++++++++++++++
>  3 files changed, 204 insertions(+)
> 
> diff --git a/drivers/iommu/iommu-sva.c b/drivers/iommu/iommu-sva.c
> index cab5d723520f..593685d891bf 100644
> --- a/drivers/iommu/iommu-sva.c
> +++ b/drivers/iommu/iommu-sva.c
> @@ -9,6 +9,9 @@
> 
>  #include <linux/iommu.h>
> 
> +/* TODO: stub for the fault queue. Remove later. */
> +#define iommu_fault_queue_flush(...)
> +
>  /**
>   * iommu_sva_device_init() - Initialize Shared Virtual Addressing for a
> device
>   * @dev: the device
> @@ -78,6 +81,8 @@ int iommu_sva_device_shutdown(struct device *dev)
>  	if (!domain)
>  		return -ENODEV;
> 
> +	__iommu_sva_unbind_dev_all(dev);
> +
>  	if (domain->ops->sva_device_shutdown)
>  		domain->ops->sva_device_shutdown(dev);
> 
> @@ -88,3 +93,103 @@ int iommu_sva_device_shutdown(struct device
> *dev)
>  	return 0;
>  }
>  EXPORT_SYMBOL_GPL(iommu_sva_device_shutdown);
> +
> +/**
> + * iommu_sva_bind_device() - Bind a process address space to a device
> + * @dev: the device
> + * @mm: the mm to bind, caller must hold a reference to it
> + * @pasid: valid address where the PASID will be stored
> + * @flags: bond properties (IOMMU_SVA_FEAT_*)
> + * @drvdata: private data passed to the mm exit handler
> + *
> + * Create a bond between device and task, allowing the device to access
> the mm
> + * using the returned PASID. A subsequent bind() for the same device and
> mm will
> + * reuse the bond (and return the same PASID), but users will have to call
> + * unbind() twice.

what's the point of requiring unbind twice?

> + *
> + * Callers should have taken care of setting up SVA for this device with
> + * iommu_sva_device_init() beforehand. They may also be notified of the
> bond
> + * disappearing, for example when the last task that uses the mm dies, by
> + * registering a notifier with iommu_register_mm_exit_handler().
> + *
> + * If IOMMU_SVA_FEAT_PASID is requested, a PASID is allocated and
> returned.
> + * TODO: The alternative, binding the non-PASID context to an mm, isn't
> + * supported at the moment because existing IOMMU domain types
> initialize the
> + * non-PASID context for iommu_map()/unmap() or bypass. This requires
> a new
> + * domain type.
> + *
> + * If IOMMU_SVA_FEAT_IOPF is not requested, the caller must pin down
> all
> + * mappings shared with the device. mlock() isn't sufficient, as it doesn't
> + * prevent minor page faults (e.g. copy-on-write). TODO: !IOPF isn't
> allowed at
> + * the moment.
> + *
> + * On success, 0 is returned and @pasid contains a valid ID. Otherwise, an
> error
> + * is returned.
> + */
> +int iommu_sva_bind_device(struct device *dev, struct mm_struct *mm,
> int *pasid,
> +			  unsigned long flags, void *drvdata)
> +{
> +	struct iommu_domain *domain;
> +	struct iommu_param *dev_param = dev->iommu_param;
> +
> +	domain = iommu_get_domain_for_dev(dev);
> +	if (!domain)
> +		return -EINVAL;
> +
> +	if (!pasid)
> +		return -EINVAL;
> +
> +	if (!dev_param || (flags & ~dev_param->sva_features))
> +		return -EINVAL;
> +
> +	if (flags != (IOMMU_SVA_FEAT_PASID | IOMMU_SVA_FEAT_IOPF))
> +		return -EINVAL;
> +
> +	return -ENOSYS; /* TODO */
> +}
> +EXPORT_SYMBOL_GPL(iommu_sva_bind_device);
> +
> +/**
> + * iommu_sva_unbind_device() - Remove a bond created with
> iommu_sva_bind_device
> + * @dev: the device
> + * @pasid: the pasid returned by bind()
> + *
> + * Remove bond between device and address space identified by @pasid.
> Users
> + * should not call unbind() if the corresponding mm exited (as the PASID
> might
> + * have been reallocated to another process.)
> + *
> + * The device must not be issuing any more transaction for this PASID. All
> + * outstanding page requests for this PASID must have been flushed to the
> IOMMU.
> + *
> + * Returns 0 on success, or an error value
> + */
> +int iommu_sva_unbind_device(struct device *dev, int pasid)
> +{
> +	struct iommu_domain *domain;
> +
> +	domain = iommu_get_domain_for_dev(dev);
> +	if (WARN_ON(!domain))
> +		return -EINVAL;
> +
> +	/*
> +	 * Caller stopped the device from issuing PASIDs, now make sure
> they are
> +	 * out of the fault queue.
> +	 */
> +	iommu_fault_queue_flush(dev);
> +
> +	return -ENOSYS; /* TODO */
> +}
> +EXPORT_SYMBOL_GPL(iommu_sva_unbind_device);
> +
> +/**
> + * __iommu_sva_unbind_dev_all() - Detach all address spaces from this
> device
> + *
> + * When detaching @device from a domain, IOMMU drivers should use
> this helper.
> + */
> +void __iommu_sva_unbind_dev_all(struct device *dev)
> +{
> +	iommu_fault_queue_flush(dev);
> +
> +	/* TODO */
> +}
> +EXPORT_SYMBOL_GPL(__iommu_sva_unbind_dev_all);
> diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
> index d4a4edaf2d8c..f977851c522b 100644
> --- a/drivers/iommu/iommu.c
> +++ b/drivers/iommu/iommu.c
> @@ -1535,6 +1535,69 @@ void iommu_detach_group(struct
> iommu_domain *domain, struct iommu_group *group)
>  }
>  EXPORT_SYMBOL_GPL(iommu_detach_group);
> 
> +/*
> + * iommu_sva_bind_group() - Share address space with all devices in the
> group.
> + * @group: the iommu group
> + * @mm: the mm to bind
> + * @pasid: valid address where the PASID will be stored
> + * @flags: bond properties (IOMMU_PROCESS_BIND_*)
> + * @drvdata: private data passed to the mm exit handler
> + *
> + * Create a bond between group and process, allowing devices in the
> group to
> + * access the process address space using @pasid.
> + *
> + * Refer to iommu_sva_bind_device() for more details.
> + *
> + * On success, 0 is returned and @pasid contains a valid ID. Otherwise, an
> error
> + * is returned.
> + */
> +int iommu_sva_bind_group(struct iommu_group *group, struct
> mm_struct *mm,
> +			 int *pasid, unsigned long flags, void *drvdata)
> +{
> +	struct group_device *device;
> +	int ret = -ENODEV;
> +
> +	if (!group->domain)
> +		return -EINVAL;
> +
> +	mutex_lock(&group->mutex);
> +	list_for_each_entry(device, &group->devices, list) {
> +		ret = iommu_sva_bind_device(device->dev, mm, pasid,
> flags,
> +					    drvdata);
> +		if (ret)
> +			break;
> +	}
> +
> +	if (ret) {
> +		list_for_each_entry_continue_reverse(device, &group-
> >devices, list)
> +			iommu_sva_unbind_device(device->dev, *pasid);
> +	}
> +	mutex_unlock(&group->mutex);
> +
> +	return ret;
> +}
> +EXPORT_SYMBOL_GPL(iommu_sva_bind_group);
> +
> +/**
> + * iommu_sva_unbind_group() - Remove a bond created with
> iommu_sva_bind_group()
> + * @group: the group
> + * @pasid: the pasid returned by bind
> + *
> + * Refer to iommu_sva_unbind_device() for more details.
> + */
> +int iommu_sva_unbind_group(struct iommu_group *group, int pasid)
> +{
> +	struct group_device *device;
> +
> +	mutex_lock(&group->mutex);
> +	list_for_each_entry(device, &group->devices, list)
> +		iommu_sva_unbind_device(device->dev, pasid);
> +	mutex_unlock(&group->mutex);
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL_GPL(iommu_sva_unbind_group);
> +
>  phys_addr_t iommu_iova_to_phys(struct iommu_domain *domain,
> dma_addr_t iova)
>  {
>  	if (unlikely(domain->ops->iova_to_phys == NULL))
> diff --git a/include/linux/iommu.h b/include/linux/iommu.h
> index e9e09eecdece..1fb10d64b9e5 100644
> --- a/include/linux/iommu.h
> +++ b/include/linux/iommu.h
> @@ -576,6 +576,10 @@ int iommu_fwspec_init(struct device *dev, struct
> fwnode_handle *iommu_fwnode,
>  void iommu_fwspec_free(struct device *dev);
>  int iommu_fwspec_add_ids(struct device *dev, u32 *ids, int num_ids);
>  const struct iommu_ops *iommu_ops_from_fwnode(struct
> fwnode_handle *fwnode);
> +extern int iommu_sva_bind_group(struct iommu_group *group,
> +				struct mm_struct *mm, int *pasid,
> +				unsigned long flags, void *drvdata);
> +extern int iommu_sva_unbind_group(struct iommu_group *group, int
> pasid);
> 
>  #else /* CONFIG_IOMMU_API */
> 
> @@ -890,12 +894,28 @@ const struct iommu_ops
> *iommu_ops_from_fwnode(struct fwnode_handle *fwnode)
>  	return NULL;
>  }
> 
> +static inline int iommu_sva_bind_group(struct iommu_group *group,
> +				       struct mm_struct *mm, int *pasid,
> +				       unsigned long flags, void *drvdata)
> +{
> +	return -ENODEV;
> +}
> +
> +static inline int iommu_sva_unbind_group(struct iommu_group *group,
> int pasid)
> +{
> +	return -ENODEV;
> +}
> +
>  #endif /* CONFIG_IOMMU_API */
> 
>  #ifdef CONFIG_IOMMU_SVA
>  extern int iommu_sva_device_init(struct device *dev, unsigned long
> features,
>  				 unsigned int max_pasid);
>  extern int iommu_sva_device_shutdown(struct device *dev);
> +extern int iommu_sva_bind_device(struct device *dev, struct mm_struct
> *mm,
> +				int *pasid, unsigned long flags, void
> *drvdata);
> +extern int iommu_sva_unbind_device(struct device *dev, int pasid);
> +extern void __iommu_sva_unbind_dev_all(struct device *dev);
>  #else /* CONFIG_IOMMU_SVA */
>  static inline int iommu_sva_device_init(struct device *dev,
>  					unsigned long features,
> @@ -908,6 +928,22 @@ static inline int
> iommu_sva_device_shutdown(struct device *dev)
>  {
>  	return -ENODEV;
>  }
> +
> +static inline int iommu_sva_bind_device(struct device *dev,
> +					struct mm_struct *mm, int *pasid,
> +					unsigned long flags, void *drvdata)
> +{
> +	return -ENODEV;
> +}
> +
> +static inline int iommu_sva_unbind_device(struct device *dev, int pasid)
> +{
> +	return -ENODEV;
> +}
> +
> +static inline void __iommu_sva_unbind_dev_all(struct device *dev)
> +{
> +}
>  #endif /* CONFIG_IOMMU_SVA */
> 
>  #endif /* __LINUX_IOMMU_H */
> --
> 2.15.1

--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Tian, Kevin Feb. 13, 2018, 8:11 a.m. UTC | #4
> From: Jean-Philippe Brucker
> Sent: Tuesday, February 13, 2018 2:33 AM
> 
> When an mm exits, devices that were bound to it must stop performing
> DMA
> on its PASID. Let device drivers register a callback to be notified on mm
> exit. Add the callback to the iommu_param structure attached to struct
> device.

what about registering the callback in sva_device_init? 

> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> ---
>  drivers/iommu/iommu-sva.c | 54
> +++++++++++++++++++++++++++++++++++++++++++++++
>  include/linux/iommu.h     | 18 ++++++++++++++++
>  2 files changed, 72 insertions(+)
> 
> diff --git a/drivers/iommu/iommu-sva.c b/drivers/iommu/iommu-sva.c
> index f9af9d66b3ed..90b524c99d3d 100644
> --- a/drivers/iommu/iommu-sva.c
> +++ b/drivers/iommu/iommu-sva.c
> @@ -569,3 +569,57 @@ void __iommu_sva_unbind_dev_all(struct device
> *dev)
>  	spin_unlock(&iommu_sva_lock);
>  }
>  EXPORT_SYMBOL_GPL(__iommu_sva_unbind_dev_all);
> +
> +/**
> + * iommu_register_mm_exit_handler() - Set a callback for mm exit
> + * @dev: the device
> + * @handler: exit handler
> + *
> + * Users of the bind/unbind API should call this function to set a
> + * device-specific callback telling them when a mm is exiting.
> + *
> + * After the callback returns, the device must not issue any more
> transaction
> + * with the PASID given as argument to the handler. In addition the
> handler gets
> + * an opaque pointer corresponding to the drvdata passed as argument of
> bind().
> + *
> + * The handler itself should return 0 on success, and an appropriate error
> code
> + * otherwise.
> + */
> +int iommu_register_mm_exit_handler(struct device *dev,
> +				   iommu_mm_exit_handler_t handler)
> +{
> +	struct iommu_param *dev_param = dev->iommu_param;
> +
> +	if (!dev_param)
> +		return -EINVAL;
> +
> +	/*
> +	 * FIXME: racy. Same as iommu_sva_device_init, but here we'll
> need a
> +	 * spinlock to call the mm_exit param from atomic context.
> +	 */
> +	if (dev_param->mm_exit)
> +		return -EBUSY;
> +
> +	get_device(dev);
> +	dev_param->mm_exit = handler;
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL_GPL(iommu_register_mm_exit_handler);
> +
> +/**
> + * iommu_unregister_mm_exit_handler() - Remove mm exit callback
> + */
> +int iommu_unregister_mm_exit_handler(struct device *dev)
> +{
> +	struct iommu_param *dev_param = dev->iommu_param;
> +
> +	if (!dev_param || !dev_param->mm_exit)
> +		return -EINVAL;
> +
> +	dev_param->mm_exit = NULL;
> +	put_device(dev);
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL_GPL(iommu_unregister_mm_exit_handler);
> diff --git a/include/linux/iommu.h b/include/linux/iommu.h
> index 09d85f44142a..1b1a16892ac1 100644
> --- a/include/linux/iommu.h
> +++ b/include/linux/iommu.h
> @@ -65,6 +65,8 @@ typedef int (*iommu_dev_fault_handler_t)(struct
> iommu_fault_event *, void *);
>  /* Request I/O page fault support */
>  #define IOMMU_SVA_FEAT_IOPF		(1 << 1)
> 
> +typedef int (*iommu_mm_exit_handler_t)(struct device *dev, int pasid,
> void *);
> +
>  struct iommu_domain_geometry {
>  	dma_addr_t aperture_start; /* First address that can be mapped
> */
>  	dma_addr_t aperture_end;   /* Last address that can be mapped
> */
> @@ -424,6 +426,7 @@ struct iommu_param {
>  	unsigned int min_pasid;
>  	unsigned int max_pasid;
>  	struct list_head mm_list;
> +	iommu_mm_exit_handler_t mm_exit;
>  };
> 
>  int  iommu_device_register(struct iommu_device *iommu);
> @@ -941,6 +944,10 @@ extern int iommu_sva_bind_device(struct device
> *dev, struct mm_struct *mm,
>  				int *pasid, unsigned long flags, void
> *drvdata);
>  extern int iommu_sva_unbind_device(struct device *dev, int pasid);
>  extern void __iommu_sva_unbind_dev_all(struct device *dev);
> +extern int iommu_register_mm_exit_handler(struct device *dev,
> +					  iommu_mm_exit_handler_t
> handler);
> +extern int iommu_unregister_mm_exit_handler(struct device *dev);
> +
>  #else /* CONFIG_IOMMU_SVA */
>  static inline int iommu_sva_device_init(struct device *dev,
>  					unsigned long features,
> @@ -969,6 +976,17 @@ static inline int iommu_sva_unbind_device(struct
> device *dev, int pasid)
>  static inline void __iommu_sva_unbind_dev_all(struct device *dev)
>  {
>  }
> +
> +static inline int iommu_register_mm_exit_handler(struct device *dev,
> +						 iommu_mm_exit_handler_t
> handler)
> +{
> +	return -ENODEV;
> +}
> +
> +static inline int iommu_unregister_mm_exit_handler(struct device *dev)
> +{
> +	return -ENODEV;
> +}
>  #endif /* CONFIG_IOMMU_SVA */
> 
>  #endif /* __LINUX_IOMMU_H */
> --
> 2.15.1

--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Jean-Philippe Brucker Feb. 13, 2018, 12:40 p.m. UTC | #5
Hi Kevin,

Thanks for taking a look!

On 13/02/18 07:31, Tian, Kevin wrote:
>> From: Jean-Philippe Brucker
>> Sent: Tuesday, February 13, 2018 2:33 AM
>>
>> Shared Virtual Addressing (SVA) provides a way for device drivers to bind
>> process address spaces to devices. This requires the IOMMU to support the
>> same page table format as CPUs, and requires the system to support I/O
> 
> "same" is a bit restrictive. "compatible" is better as you used in coverletter. :-)

Indeed

[..]
>> +config IOMMU_SVA
>> +	bool "Shared Virtual Addressing API for the IOMMU"
>> +	select IOMMU_API
>> +	help
>> +	  Enable process address space management for the IOMMU API. In
>> systems
>> +	  that support it, device drivers can bind process address spaces to
>> +	  devices and share their page tables using this API.
> 
> "their page table" is a bit confusing here.

Maybe this is sufficient:
"In systems that support it, drivers can share process address spaces with
their devices using this API."

[...]
>> +
>> +/**
>> + * iommu_sva_device_init() - Initialize Shared Virtual Addressing for a
>> device
>> + * @dev: the device
>> + * @features: bitmask of features that need to be initialized
>> + * @max_pasid: max PASID value supported by the device
>> + *
>> + * Users of the bind()/unbind() API must call this function to initialize all
>> + * features required for SVA.
>> + *
>> + * - If the device should support multiple address spaces (e.g. PCI PASID),
>> + *   IOMMU_SVA_FEAT_PASID must be requested.
> 
> I think it is by default assumed when using this API, based on definition of
> SVA. Can you elaborate the situation where this flag can be cleared?

When passing a device to userspace, you could also share its non-pasid
address space with the process. It requires a new domain type so is left
as a TODO in patch 2/37. I did get requests for this feature, though I
think it was mostly for prototyping. I guess I could remove the flag, and
reintroduce it as IOMMU_SVA_FEAT_NO_PASID later on.

>> + *
>> + *   By default the PASID allocated during bind() is limited by the IOMMU
>> + *   capacity, and by the device PASID width defined in the PCI capability or
>> in
>> + *   the firmware description. Setting @max_pasid to a non-zero value
>> smaller
>> + *   than this limit overrides it.
>> + *
>> + * - If the device should support I/O Page Faults (e.g. PCI PRI),
>> + *   IOMMU_SVA_FEAT_IOPF must be requested.
>> + *
>> + * The device should not be be performing any DMA while this function is
> 
> remove double "be"
> 
>> + * running.
> 
> "otherwise the behavior is undefined"

ok

[...]
>> +	ret = domain->ops->sva_device_init(dev, features, &min_pasid,
>> +					   &max_pasid);
>> +	if (ret)
>> +		return ret;
>> +
>> +	/* FIXME: racy. Next version should have a mutex (same as fault
>> handler) */
>> +	dev_param->sva_features = features;
>> +	dev_param->min_pasid = min_pasid;
>> +	dev_param->max_pasid = max_pasid;
> 
> what's the point of min_pasid here?

Arm SMMUv3 uses entry 0 of the PASID table for the default (non-pasid)
context, so it needs to set min_pasid to 1. AMD IOMMU recently added a
similar feature (GIoSup), if I understood correctly.

Thanks,
Jean
--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Jean-Philippe Brucker Feb. 13, 2018, 12:57 p.m. UTC | #6
On 13/02/18 07:54, Tian, Kevin wrote:
>> From: Jean-Philippe Brucker
>> Sent: Tuesday, February 13, 2018 2:33 AM
>>
>> Add bind() and unbind() operations to the IOMMU API. Device drivers can
>> use them to share process page tables with their devices. bind_group()
>> is provided for VFIO's convenience, as it needs to provide a coherent
>> interface on containers. Other device drivers will most likely want to
>> use bind_device(), which binds a single device in the group.
> 
> I saw your bind_group implementation tries to bind the address space
> for all devices within a group, which IMO has some problem. Based on PCIe
> spec, packet routing on the bus doesn't take PASID into consideration. 
> since devices within same group cannot be isolated based on requestor-ID
> i.e. traffic not guaranteed going to IOMMU, enabling SVA on multiple devices
> could cause undesired p2p.
But so does enabling "classic" DMA... If two devices are not protected by
ACS for example, they are put in the same IOMMU group, and one device
might be able to snoop the other's DMA. VFIO allows userspace to create a
container for them and use MAP/UNMAP, but makes it explicit to the user
that for DMA, these devices are not isolated and must be considered as a
single device (you can't pass them to different VMs or put them in
different containers). So I tried to keep the same idea as MAP/UNMAP for
SVA, performing BIND/UNBIND operations on the VFIO container instead of
the device.

I kept the analogy simple though, because I don't think there will be many
SVA-capable systems that require IOMMU groups. They will likely implement
proper device isolation. Unlike iommu_attach_device(), bind_device()
doesn't call bind_group(), because keeping bonds consistent in groups is
complicated, not worth implementing (drivers can explicitly bind() all
devices that need it) and probably wouldn't ever be used. I also can't
test it. But maybe we could implement the following for now:

* bind_device() fails if the device's group has more than one device,
otherwise calls __bind_device(). This prevents device drivers that are
oblivious to IOMMU groups from opening a backdoor.

* bind_group() calls __bind_device() for all devices in group. This way
users that are aware of IOMMU groups can still use them safely. Note that
at the moment bind_group() fails as soon as it finds a device that doesn't
support SVA. Having all devices support SVA in a given group is
unrealistic and this behavior ought to be improved.

* hotplugging a device into a group still succeeds even if the group
already has mm bonds. Same happens for classic DMA, a hotplugged device
will have access to all mappings already present in the domain.

> If my understanding of PCIe spec is correct, probably we should fail 
> calling bind_group()/bind_device() when there are multiple devices within 
> the given group. If only one device then bind_group is essentially a wrapper
> to bind_device.>>
>> Regardless of the IOMMU group or domain a device is in, device drivers
>> should call bind() for each device that will use the PASID.
>>
[...]
>> +/**
>> + * iommu_sva_bind_device() - Bind a process address space to a device
>> + * @dev: the device
>> + * @mm: the mm to bind, caller must hold a reference to it
>> + * @pasid: valid address where the PASID will be stored
>> + * @flags: bond properties (IOMMU_SVA_FEAT_*)
>> + * @drvdata: private data passed to the mm exit handler
>> + *
>> + * Create a bond between device and task, allowing the device to access
>> the mm
>> + * using the returned PASID. A subsequent bind() for the same device and
>> mm will
>> + * reuse the bond (and return the same PASID), but users will have to call
>> + * unbind() twice.
> 
> what's the point of requiring unbind twice?

Mmh, that was necessary when we kept bond information as domain<->mm, but
since it's now device<->mm, we can probably remove the bond refcount. I
consider that a bind() between a given device and mm will always be issued
by the same driver.

Thanks,
Jean
--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Jean-Philippe Brucker Feb. 13, 2018, 12:57 p.m. UTC | #7
On 13/02/18 08:11, Tian, Kevin wrote:
>> From: Jean-Philippe Brucker
>> Sent: Tuesday, February 13, 2018 2:33 AM
>>
>> When an mm exits, devices that were bound to it must stop performing
>> DMA
>> on its PASID. Let device drivers register a callback to be notified on mm
>> exit. Add the callback to the iommu_param structure attached to struct
>> device.
> 
> what about registering the callback in sva_device_init? 

I don't have a preference. This way it look like
iommu_register_device_fault_handler, but adding the callback to
sva_device_init makes sense too.

Thanks,
Jean
--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Jean-Philippe Brucker Feb. 13, 2018, 12:58 p.m. UTC | #8
Hi,

On 13/02/18 01:46, Xu Zaibo wrote:
> Hi,
> 
> On 2018/2/13 2:33, Jean-Philippe Brucker wrote:
>> The SMMU provides a Stall model for handling page faults in platform
>> devices. It is similar to PCI PRI, but doesn't require devices to have
>> their own translation cache. Instead, faulting transactions are parked and
>> the OS is given a chance to fix the page tables and retry the transaction.
>>
>> Enable stall for devices that support it (opt-in by firmware). When an
>> event corresponds to a translation error, call the IOMMU fault handler. If
>> the fault is recoverable, it will call us back to terminate or continue
>> the stall.
>>
>> Note that this patch tweaks the iommu_fault_event and page_response_msg to
>> extend the fault id field. Stall uses 16 bits of IDs whereas PCI PRI only
>> uses 9.
> For PCIe devices without ATC,  can they use this Stall model?

Unfortunately no, Stall it is incompatible with PCI. Timing constraints in
PCI prevent from stalling transactions in the IOMMU.

Thanks,
Jean
--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Tian, Kevin Feb. 13, 2018, 11:34 p.m. UTC | #9
> From: Jean-Philippe Brucker
> Sent: Tuesday, February 13, 2018 8:57 PM
> 
> On 13/02/18 07:54, Tian, Kevin wrote:
> >> From: Jean-Philippe Brucker
> >> Sent: Tuesday, February 13, 2018 2:33 AM
> >>
> >> Add bind() and unbind() operations to the IOMMU API. Device drivers
> can
> >> use them to share process page tables with their devices. bind_group()
> >> is provided for VFIO's convenience, as it needs to provide a coherent
> >> interface on containers. Other device drivers will most likely want to
> >> use bind_device(), which binds a single device in the group.
> >
> > I saw your bind_group implementation tries to bind the address space
> > for all devices within a group, which IMO has some problem. Based on
> PCIe
> > spec, packet routing on the bus doesn't take PASID into consideration.
> > since devices within same group cannot be isolated based on requestor-
> ID
> > i.e. traffic not guaranteed going to IOMMU, enabling SVA on multiple
> devices
> > could cause undesired p2p.
> But so does enabling "classic" DMA... If two devices are not protected by
> ACS for example, they are put in the same IOMMU group, and one device
> might be able to snoop the other's DMA. VFIO allows userspace to create a
> container for them and use MAP/UNMAP, but makes it explicit to the user
> that for DMA, these devices are not isolated and must be considered as a
> single device (you can't pass them to different VMs or put them in
> different containers). So I tried to keep the same idea as MAP/UNMAP for
> SVA, performing BIND/UNBIND operations on the VFIO container instead of
> the device.

there is a small difference. for classic DMA we can reserve PCI BARs 
when allocating IOVA, thus multiple devices in the same group can 
still work correctly applied with same translation, if isolation is not
cared in between. However for SVA it's CPU virtual addresses 
managed by kernel mm thus difficult to introduce similar address 
reservation. Then it's possible for a VA falling into other device's 
BAR in the same group and cause undesired p2p traffic. In such 
regard, SVA is actually functionally-broken.

> 
> I kept the analogy simple though, because I don't think there will be many
> SVA-capable systems that require IOMMU groups. They will likely

I agree that multiple SVA-capable devices in same IOMMU group is not
a typical configuration, especially it's usually observed on new devices.
Then based on above limitation, I think we could just explicitly avoid
enabling SVA in such case. :-)

> implement
> proper device isolation. Unlike iommu_attach_device(), bind_device()
> doesn't call bind_group(), because keeping bonds consistent in groups is
> complicated, not worth implementing (drivers can explicitly bind() all
> devices that need it) and probably wouldn't ever be used. I also can't
> test it. But maybe we could implement the following for now:
> 
> * bind_device() fails if the device's group has more than one device,
> otherwise calls __bind_device(). This prevents device drivers that are
> oblivious to IOMMU groups from opening a backdoor.
> 
> * bind_group() calls __bind_device() for all devices in group. This way
> users that are aware of IOMMU groups can still use them safely. Note that
> at the moment bind_group() fails as soon as it finds a device that doesn't
> support SVA. Having all devices support SVA in a given group is
> unrealistic and this behavior ought to be improved.
> 
> * hotplugging a device into a group still succeeds even if the group
> already has mm bonds. Same happens for classic DMA, a hotplugged
> device
> will have access to all mappings already present in the domain.
> 
> > If my understanding of PCIe spec is correct, probably we should fail
> > calling bind_group()/bind_device() when there are multiple devices within
> > the given group. If only one device then bind_group is essentially a
> wrapper
> > to bind_device.>>
> >> Regardless of the IOMMU group or domain a device is in, device drivers
> >> should call bind() for each device that will use the PASID.
> >>
> [...]
> >> +/**
> >> + * iommu_sva_bind_device() - Bind a process address space to a device
> >> + * @dev: the device
> >> + * @mm: the mm to bind, caller must hold a reference to it
> >> + * @pasid: valid address where the PASID will be stored
> >> + * @flags: bond properties (IOMMU_SVA_FEAT_*)
> >> + * @drvdata: private data passed to the mm exit handler
> >> + *
> >> + * Create a bond between device and task, allowing the device to access
> >> the mm
> >> + * using the returned PASID. A subsequent bind() for the same device
> and
> >> mm will
> >> + * reuse the bond (and return the same PASID), but users will have to
> call
> >> + * unbind() twice.
> >
> > what's the point of requiring unbind twice?
> 
> Mmh, that was necessary when we kept bond information as domain<-
> >mm, but
> since it's now device<->mm, we can probably remove the bond refcount. I
> consider that a bind() between a given device and mm will always be issued
> by the same driver.
> 
> Thanks,
> Jean
--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Tian, Kevin Feb. 13, 2018, 11:43 p.m. UTC | #10
> From: Jean-Philippe Brucker
> Sent: Tuesday, February 13, 2018 8:40 PM
> 
> 
> [...]
> >> +
> >> +/**
> >> + * iommu_sva_device_init() - Initialize Shared Virtual Addressing for a
> >> device
> >> + * @dev: the device
> >> + * @features: bitmask of features that need to be initialized
> >> + * @max_pasid: max PASID value supported by the device
> >> + *
> >> + * Users of the bind()/unbind() API must call this function to initialize all
> >> + * features required for SVA.
> >> + *
> >> + * - If the device should support multiple address spaces (e.g. PCI
> PASID),
> >> + *   IOMMU_SVA_FEAT_PASID must be requested.
> >
> > I think it is by default assumed when using this API, based on definition of
> > SVA. Can you elaborate the situation where this flag can be cleared?
> 
> When passing a device to userspace, you could also share its non-pasid
> address space with the process. It requires a new domain type so is left
> as a TODO in patch 2/37. I did get requests for this feature, though I
> think it was mostly for prototyping. I guess I could remove the flag, and
> reintroduce it as IOMMU_SVA_FEAT_NO_PASID later on.

sorry I still didn't get the definition of non-pasid address space. 
Did you mean the GPA/IOVA address space and no_pasid implies
actually some default PASID associated?

> 
> [...]
> >> +	ret = domain->ops->sva_device_init(dev, features, &min_pasid,
> >> +					   &max_pasid);
> >> +	if (ret)
> >> +		return ret;
> >> +
> >> +	/* FIXME: racy. Next version should have a mutex (same as fault
> >> handler) */
> >> +	dev_param->sva_features = features;
> >> +	dev_param->min_pasid = min_pasid;
> >> +	dev_param->max_pasid = max_pasid;
> >
> > what's the point of min_pasid here?
> 
> Arm SMMUv3 uses entry 0 of the PASID table for the default (non-pasid)
> context, so it needs to set min_pasid to 1. AMD IOMMU recently added a
> similar feature (GIoSup), if I understood correctly.
> 

just for such purpose maybe we should just define a reserved_pasid
otherwise there will be some waste if an implementation allows it
non-zero.

Thanks
Kevin
 
--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Jacob Pan Feb. 14, 2018, 7:18 a.m. UTC | #11
On Mon, 12 Feb 2018 18:33:22 +0000
Jean-Philippe Brucker <jean-philippe.brucker@arm.com> wrote:

> Some systems allow devices to handle IOMMU translation faults in the
> core mm. For example systems supporting the PCI PRI extension or Arm
> SMMU stall model. Infrastructure for reporting such recoverable page
> faults was recently added to the IOMMU core, for SVA virtualization.
> Extend iommu_report_device_fault() to handle host page faults as well.
> 
> * IOMMU drivers instantiate a fault workqueue, using
>   iommu_fault_queue_init() and iommu_fault_queue_destroy().
> 
> * When it receives a fault event, supposedly in an IRQ handler, the
> IOMMU driver reports the fault using iommu_report_device_fault()
> 
> * If the device driver registered a handler (e.g. VFIO), pass down the
>   fault event. Otherwise submit it to the fault queue, to be handled
> in a thread.
> 
> * When the fault corresponds to an io_mm, call the mm fault handler
> on it (in next patch).
> 
> * Once the fault is handled, the mm wrapper or the device driver
> reports success of failure with iommu_page_response(). The
> translation is either retried or aborted, depending on the response
> code.
> 
Hi Jean,
Seems a good approach to consolidate page fault. I will try to test
intel-svm code with this flow. more comments inline.
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> ---
>  drivers/iommu/Kconfig      |  10 ++
>  drivers/iommu/Makefile     |   1 +
>  drivers/iommu/io-pgfault.c | 282
> +++++++++++++++++++++++++++++++++++++++++++++
> drivers/iommu/iommu-sva.c  |   3 - drivers/iommu/iommu.c      |  31
> ++--- include/linux/iommu.h      |  34 +++++-
>  6 files changed, 339 insertions(+), 22 deletions(-)
>  create mode 100644 drivers/iommu/io-pgfault.c
> 
> diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
> index 146eebe9a4bb..e751bb9958ba 100644
> --- a/drivers/iommu/Kconfig
> +++ b/drivers/iommu/Kconfig
> @@ -85,6 +85,15 @@ config IOMMU_SVA
>  
>  	  If unsure, say N here.
>  
> +config IOMMU_FAULT
> +	bool "Fault handler for the IOMMU API"
> +	select IOMMU_API
> +	help
> +	  Enable the generic fault handler for the IOMMU API, that
> handles
> +	  recoverable page faults or inject them into guests.
> +
> +	  If unsure, say N here.
> +
>  config FSL_PAMU
>  	bool "Freescale IOMMU support"
>  	depends on PCI
> @@ -156,6 +165,7 @@ config INTEL_IOMMU
>  	select IOMMU_API
>  	select IOMMU_IOVA
>  	select DMAR_TABLE
> +	select IOMMU_FAULT
>  	help
>  	  DMA remapping (DMAR) devices support enables independent
> address translations for Direct Memory Access (DMA) from devices.
> diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
> index 1dbcc89ebe4c..f4324e29035e 100644
> --- a/drivers/iommu/Makefile
> +++ b/drivers/iommu/Makefile
> @@ -4,6 +4,7 @@ obj-$(CONFIG_IOMMU_API) += iommu-traces.o
>  obj-$(CONFIG_IOMMU_API) += iommu-sysfs.o
>  obj-$(CONFIG_IOMMU_DMA) += dma-iommu.o
>  obj-$(CONFIG_IOMMU_SVA) += iommu-sva.o
> +obj-$(CONFIG_IOMMU_FAULT) += io-pgfault.o
>  obj-$(CONFIG_IOMMU_IO_PGTABLE) += io-pgtable.o
>  obj-$(CONFIG_IOMMU_IO_PGTABLE_ARMV7S) += io-pgtable-arm-v7s.o
>  obj-$(CONFIG_IOMMU_IO_PGTABLE_LPAE) += io-pgtable-arm.o
> diff --git a/drivers/iommu/io-pgfault.c b/drivers/iommu/io-pgfault.c
> new file mode 100644
> index 000000000000..33309ed316d2
> --- /dev/null
> +++ b/drivers/iommu/io-pgfault.c
> @@ -0,0 +1,282 @@
> +/*
> + * Handle device page faults
> + *
> + * Copyright (C) 2018 ARM Ltd.
> + * Author: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> + *
> + * SPDX-License-Identifier: GPL-2.0
> + */
> +
> +#include <linux/iommu.h>
> +#include <linux/list.h>
> +#include <linux/slab.h>
> +#include <linux/workqueue.h>
> +
> +static struct workqueue_struct *iommu_fault_queue;
> +static DECLARE_RWSEM(iommu_fault_queue_sem);
> +static refcount_t iommu_fault_queue_refs = REFCOUNT_INIT(0);
> +static BLOCKING_NOTIFIER_HEAD(iommu_fault_queue_flush_notifiers);
> +
> +/* Used to store incomplete fault groups */
> +static LIST_HEAD(iommu_partial_faults);
> +static DEFINE_SPINLOCK(iommu_partial_faults_lock);
> +
should partial fault list be per iommu?
> +struct iommu_fault_context {
> +	struct device			*dev;
> +	struct iommu_fault_event	evt;
> +	struct list_head		head;
> +};
> +
> +struct iommu_fault_group {
> +	struct iommu_domain		*domain;
> +	struct iommu_fault_context	last_fault;
> +	struct list_head		faults;
> +	struct work_struct		work;
> +};
> +
> +/*
> + * iommu_fault_complete() - Finish handling a fault
> + *
> + * Send a response if necessary and pass on the sanitized status code
> + */
> +static int iommu_fault_complete(struct iommu_domain *domain, struct
> device *dev,
> +				struct iommu_fault_event *evt, int
> status) +{
> +	struct page_response_msg resp = {
> +		.addr		= evt->addr,
> +		.pasid		= evt->pasid,
> +		.pasid_present	= evt->pasid_valid,
> +		.page_req_group_id = evt->page_req_group_id,
> +		.type		= IOMMU_PAGE_GROUP_RESP,
> +		.private_data	= evt->iommu_private,
> +	};
> +
> +	/*
> +	 * There is no "handling" an unrecoverable fault, so the
> only valid
> +	 * return values are 0 or an error.
> +	 */
> +	if (evt->type == IOMMU_FAULT_DMA_UNRECOV)
> +		return status > 0 ? 0 : status;
> +
> +	/* Someone took ownership of the fault and will complete it
> later */
> +	if (status == IOMMU_PAGE_RESP_HANDLED)
> +		return 0;
> +
> +	/*
> +	 * There was an internal error with handling the recoverable
> fault. Try
> +	 * to complete the fault if possible.
> +	 */
> +	if (status < 0)
> +		status = IOMMU_PAGE_RESP_INVALID;
> +
> +	if (WARN_ON(!domain->ops->page_response))
> +		/*
> +		 * The IOMMU driver shouldn't have submitted
> recoverable faults
> +		 * if it cannot receive a response.
> +		 */
> +		return -EINVAL;
> +
> +	resp.resp_code = status;
> +	return domain->ops->page_response(domain, dev, &resp);
> +}
> +
> +static int iommu_fault_handle_single(struct iommu_fault_context
> *fault) +{
> +	/* TODO */
> +	return -ENODEV;
> +}
> +
> +static void iommu_fault_handle_group(struct work_struct *work)
> +{
> +	struct iommu_fault_group *group;
> +	struct iommu_fault_context *fault, *next;
> +	int status = IOMMU_PAGE_RESP_SUCCESS;
> +
> +	group = container_of(work, struct iommu_fault_group, work);
> +
> +	list_for_each_entry_safe(fault, next, &group->faults, head) {
> +		struct iommu_fault_event *evt = &fault->evt;
> +		/*
> +		 * Errors are sticky: don't handle subsequent faults
> in the
> +		 * group if there is an error.
> +		 */
> +		if (status == IOMMU_PAGE_RESP_SUCCESS)
> +			status = iommu_fault_handle_single(fault);
> +
> +		if (!evt->last_req)
> +			kfree(fault);
> +	}
> +
> +	iommu_fault_complete(group->domain, group->last_fault.dev,
> +			     &group->last_fault.evt, status);
> +	kfree(group);
> +}
> +
> +static int iommu_queue_fault(struct iommu_domain *domain, struct
> device *dev,
> +			     struct iommu_fault_event *evt)
> +{
> +	struct iommu_fault_group *group;
> +	struct iommu_fault_context *fault, *next;
> +
> +	if (!iommu_fault_queue)
> +		return -ENOSYS;
> +
> +	if (!evt->last_req) {
> +		fault = kzalloc(sizeof(*fault), GFP_KERNEL);
> +		if (!fault)
> +			return -ENOMEM;
> +
> +		fault->evt = *evt;
> +		fault->dev = dev;
> +
> +		/* Non-last request of a group. Postpone until the
> last one */
> +		spin_lock(&iommu_partial_faults_lock);
> +		list_add_tail(&fault->head, &iommu_partial_faults);
> +		spin_unlock(&iommu_partial_faults_lock);
> +
> +		return IOMMU_PAGE_RESP_HANDLED;
> +	}
> +
> +	group = kzalloc(sizeof(*group), GFP_KERNEL);
> +	if (!group)
> +		return -ENOMEM;
> +
> +	group->last_fault.evt = *evt;
> +	group->last_fault.dev = dev;
> +	group->domain = domain;
> +	INIT_LIST_HEAD(&group->faults);
> +	list_add(&group->last_fault.head, &group->faults);
> +	INIT_WORK(&group->work, iommu_fault_handle_group);
> +
> +	/* See if we have pending faults for this group */
> +	spin_lock(&iommu_partial_faults_lock);
> +	list_for_each_entry_safe(fault, next, &iommu_partial_faults,
> head) {
> +		if (fault->evt.page_req_group_id ==
> evt->page_req_group_id &&
> +		    fault->dev == dev) {
> +			list_del(&fault->head);
> +			/* Insert *before* the last fault */
> +			list_add(&fault->head, &group->faults);
> +		}
> +	}
> +	spin_unlock(&iommu_partial_faults_lock);
> +
> +	queue_work(iommu_fault_queue, &group->work);
> +
> +	/* Postpone the fault completion */
> +	return IOMMU_PAGE_RESP_HANDLED;
> +}
> +
> +/**
> + * iommu_report_device_fault() - Handle fault in device driver or mm
> + *
> + * If the device driver expressed interest in handling fault, report
> it through
> + * the callback. If the fault is recoverable, try to page in the
> address.
> + */
> +int iommu_report_device_fault(struct device *dev, struct
> iommu_fault_event *evt) +{
> +	int ret = -ENOSYS;
> +	struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
> +
> +	if (!domain)
> +		return -ENODEV;
> +
> +	/*
> +	 * if upper layers showed interest and installed a fault
> handler,
> +	 * invoke it.
> +	 */
> +	if (iommu_has_device_fault_handler(dev)) {
I think Alex pointed out this is racy, so adding a mutex to the
iommu_fault_param and acquire it would help. Do we really
atomic handler?
> +		struct iommu_fault_param *param =
> dev->iommu_param->fault_param; +
> +		return param->handler(evt, param->data);
Even upper layer (VFIO) registered handler to propagate PRQ to a guest
to fault in the pages, we may still need to keep track of the page
requests that need page response later, i.e. last page in group or
stream request in vt-d. This will allow us sanitize the page response
come back from the guest/VFIO.
In my next round, I am adding a per device list under iommu_fault_param
for pending page request. This will also address the situation where
guest failed to send response. We can enforce time or credit limit of
pending requests based on this list.

> +	}
> +
> +	/* If the handler is blocking, handle fault in the workqueue
> */
> +	if (evt->type == IOMMU_FAULT_PAGE_REQ)
> +		ret = iommu_queue_fault(domain, dev, evt);
> +
> +	return iommu_fault_complete(domain, dev, evt, ret);
> +}
> +EXPORT_SYMBOL_GPL(iommu_report_device_fault);
> +
> +/**
> + * iommu_fault_queue_register() - register an IOMMU driver to the
> fault queue
> + * @flush_notifier: a notifier block that is called before the fault
> queue is
> + * flushed. The IOMMU driver should commit all faults that are
> pending in its
> + * low-level queues at the time of the call, into the fault queue.
> The notifier
> + * takes a device pointer as argument, hinting what endpoint is
> causing the
> + * flush. When the device is NULL, all faults should be committed.
> + */
> +int iommu_fault_queue_register(struct notifier_block *flush_notifier)
> +{
> +	/*
> +	 * The WQ is unordered because the low-level handler
> enqueues faults by
> +	 * group. PRI requests within a group have to be ordered,
> but once
> +	 * that's dealt with, the high-level function can handle
> groups out of
> +	 * order.
> +	 */
> +	down_write(&iommu_fault_queue_sem);
> +	if (!iommu_fault_queue) {
> +		iommu_fault_queue =
> alloc_workqueue("iommu_fault_queue",
> +						    WQ_UNBOUND, 0);
> +		if (iommu_fault_queue)
> +			refcount_set(&iommu_fault_queue_refs, 1);
> +	} else {
> +		refcount_inc(&iommu_fault_queue_refs);
> +	}
> +	up_write(&iommu_fault_queue_sem);
> +
> +	if (!iommu_fault_queue)
> +		return -ENOMEM;
> +
> +	if (flush_notifier)
> +
> blocking_notifier_chain_register(&iommu_fault_queue_flush_notifiers,
> +						 flush_notifier);
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL_GPL(iommu_fault_queue_register);
> +
> +/**
> + * iommu_fault_queue_flush() - Ensure that all queued faults have
> been
> + * processed.
> + * @dev: the endpoint whose faults need to be flushed. If NULL,
> flush all
> + *       pending faults.
> + *
> + * Users must call this function when releasing a PASID, to ensure
> that all
> + * pending faults affecting this PASID have been handled, and won't
> affect the
> + * address space of a subsequent process that reuses this PASID.
> + */
> +void iommu_fault_queue_flush(struct device *dev)
> +{
> +
> blocking_notifier_call_chain(&iommu_fault_queue_flush_notifiers, 0,
> dev); +
> +	down_read(&iommu_fault_queue_sem);
> +	/*
> +	 * Don't flush the partial faults list. All PRGs with the
> PASID are
> +	 * complete and have been submitted to the queue.
> +	 */
> +	if (iommu_fault_queue)
> +		flush_workqueue(iommu_fault_queue);
> +	up_read(&iommu_fault_queue_sem);
> +}
> +EXPORT_SYMBOL_GPL(iommu_fault_queue_flush);
> +
> +/**
> + * iommu_fault_queue_unregister() - Unregister an IOMMU driver from
> the fault
> + * queue.
> + * @flush_notifier: same parameter as iommu_fault_queue_register
> + */
> +void iommu_fault_queue_unregister(struct notifier_block
> *flush_notifier) +{
> +	down_write(&iommu_fault_queue_sem);
> +	if (refcount_dec_and_test(&iommu_fault_queue_refs)) {
> +		destroy_workqueue(iommu_fault_queue);
> +		iommu_fault_queue = NULL;
> +	}
> +	up_write(&iommu_fault_queue_sem);
> +
> +	if (flush_notifier)
> +
> blocking_notifier_chain_unregister(&iommu_fault_queue_flush_notifiers,
> +						   flush_notifier);
> +}
> +EXPORT_SYMBOL_GPL(iommu_fault_queue_unregister);
> diff --git a/drivers/iommu/iommu-sva.c b/drivers/iommu/iommu-sva.c
> index 4bc2a8c12465..d7b231cd7355 100644
> --- a/drivers/iommu/iommu-sva.c
> +++ b/drivers/iommu/iommu-sva.c
> @@ -102,9 +102,6 @@
>   * the device table and PASID 0 would be available to the allocator.
>   */
>  
> -/* TODO: stub for the fault queue. Remove later. */
> -#define iommu_fault_queue_flush(...)
> -
>  struct iommu_bond {
>  	struct io_mm		*io_mm;
>  	struct device		*dev;
> diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
> index 1d60b32a6744..c475893ec7dc 100644
> --- a/drivers/iommu/iommu.c
> +++ b/drivers/iommu/iommu.c
> @@ -798,6 +798,17 @@ int iommu_group_unregister_notifier(struct
> iommu_group *group, }
>  EXPORT_SYMBOL_GPL(iommu_group_unregister_notifier);
>  
> +/**
> + * iommu_register_device_fault_handler() - Register a device fault
> handler
> + * @dev: the device
> + * @handler: the fault handler
> + * @data: private data passed as argument to the callback
> + *
> + * When an IOMMU fault event is received, call this handler with the
> fault event
> + * and data as argument.
> + *
> + * Return 0 if the fault handler was installed successfully, or an
> error.
> + */
>  int iommu_register_device_fault_handler(struct device *dev,
>  					iommu_dev_fault_handler_t
> handler, void *data)
> @@ -825,6 +836,13 @@ int iommu_register_device_fault_handler(struct
> device *dev, }
>  EXPORT_SYMBOL_GPL(iommu_register_device_fault_handler);
>  
> +/**
> + * iommu_unregister_device_fault_handler() - Unregister the device
> fault handler
> + * @dev: the device
> + *
> + * Remove the device fault handler installed with
> + * iommu_register_device_fault_handler().
> + */
>  int iommu_unregister_device_fault_handler(struct device *dev)
>  {
>  	struct iommu_param *idata = dev->iommu_param;
> @@ -840,19 +858,6 @@ int iommu_unregister_device_fault_handler(struct
> device *dev) }
>  EXPORT_SYMBOL_GPL(iommu_unregister_device_fault_handler);
>  
> -
> -int iommu_report_device_fault(struct device *dev, struct
> iommu_fault_event *evt) -{
> -	/* we only report device fault if there is a handler
> registered */
> -	if (!dev->iommu_param || !dev->iommu_param->fault_param ||
> -		!dev->iommu_param->fault_param->handler)
> -		return -ENOSYS;
> -
> -	return dev->iommu_param->fault_param->handler(evt,
> -
> dev->iommu_param->fault_param->data); -}
> -EXPORT_SYMBOL_GPL(iommu_report_device_fault);
> -
>  /**
>   * iommu_group_id - Return ID for a group
>   * @group: the group to ID
> diff --git a/include/linux/iommu.h b/include/linux/iommu.h
> index 226ab4f3ae0e..65e56f28e0ce 100644
> --- a/include/linux/iommu.h
> +++ b/include/linux/iommu.h
> @@ -205,6 +205,7 @@ struct page_response_msg {
>  	u32 resp_code:4;
>  #define IOMMU_PAGE_RESP_SUCCESS	0
>  #define IOMMU_PAGE_RESP_INVALID	1
> +#define IOMMU_PAGE_RESP_HANDLED	2
>  #define IOMMU_PAGE_RESP_FAILURE	0xF
>  
>  	u32 pasid_present:1;
> @@ -534,7 +535,6 @@ extern int
> iommu_register_device_fault_handler(struct device *dev, 
>  extern int iommu_unregister_device_fault_handler(struct device *dev);
>  
> -extern int iommu_report_device_fault(struct device *dev, struct
> iommu_fault_event *evt); extern int iommu_page_response(struct
> iommu_domain *domain, struct device *dev, struct page_response_msg
> *msg); 
> @@ -836,11 +836,6 @@ static inline bool
> iommu_has_device_fault_handler(struct device *dev) return false;
>  }
>  
> -static inline int iommu_report_device_fault(struct device *dev,
> struct iommu_fault_event *evt) -{
> -	return 0;
> -}
> -
>  static inline int iommu_page_response(struct iommu_domain *domain,
> struct device *dev, struct page_response_msg *msg)
>  {
> @@ -1005,4 +1000,31 @@ static inline struct mm_struct
> *iommu_sva_find(int pasid) }
>  #endif /* CONFIG_IOMMU_SVA */
>  
> +#ifdef CONFIG_IOMMU_FAULT
> +extern int iommu_fault_queue_register(struct notifier_block
> *flush_notifier); +extern void iommu_fault_queue_flush(struct device
> *dev); +extern void iommu_fault_queue_unregister(struct
> notifier_block *flush_notifier); +extern int
> iommu_report_device_fault(struct device *dev,
> +				     struct iommu_fault_event *evt);
> +#else /* CONFIG_IOMMU_FAULT */
> +static inline int iommu_fault_queue_register(struct notifier_block
> *flush_notifier) +{
> +	return -ENODEV;
> +}
> +
> +static inline void iommu_fault_queue_flush(struct device *dev)
> +{
> +}
> +
> +static inline void iommu_fault_queue_unregister(struct
> notifier_block *flush_notifier) +{
> +}
> +
> +static inline int iommu_report_device_fault(struct device *dev,
> +					    struct iommu_fault_event
> *evt) +{
> +	return 0;
> +}
> +#endif /* CONFIG_IOMMU_FAULT */
> +
>  #endif /* __LINUX_IOMMU_H */

[Jacob Pan]
--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Jacob Pan Feb. 14, 2018, 6:46 p.m. UTC | #12
On Mon, 12 Feb 2018 18:33:23 +0000
Jean-Philippe Brucker <jean-philippe.brucker@arm.com> wrote:

> When a recoverable page fault is handled by the fault workqueue, find
> the associated mm and call handle_mm_fault.
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> ---
>  drivers/iommu/io-pgfault.c | 89
> ++++++++++++++++++++++++++++++++++++++++++++-- 1 file changed, 87
> insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/iommu/io-pgfault.c b/drivers/iommu/io-pgfault.c
> index 33309ed316d2..565ec01a1b5f 100644
> --- a/drivers/iommu/io-pgfault.c
> +++ b/drivers/iommu/io-pgfault.c
> @@ -9,6 +9,7 @@
>  
>  #include <linux/iommu.h>
>  #include <linux/list.h>
> +#include <linux/sched/mm.h>
>  #include <linux/slab.h>
>  #include <linux/workqueue.h>
>  
> @@ -82,8 +83,92 @@ static int iommu_fault_complete(struct
> iommu_domain *domain, struct device *dev, 
>  static int iommu_fault_handle_single(struct iommu_fault_context
> *fault) {
> -	/* TODO */
> -	return -ENODEV;
> +	struct mm_struct *mm;
> +	struct vm_area_struct *vma;
> +	unsigned int access_flags = 0;
unsigned long to match vm_flags?
> +	int ret = IOMMU_PAGE_RESP_INVALID;
> +	unsigned int fault_flags = FAULT_FLAG_REMOTE;
> +	struct iommu_fault_event *evt = &fault->evt;
> +
> +	if (!evt->pasid_valid)
> +		return ret;
I guess for not we don't handle PRQ without PASID, right?
> +
> +	/*
> +	 * Special case: PASID Stop Marker (LRW = 0b100) doesn't
> expect a
> +	 * response. A Stop Marker may be generated when disabling a
> PASID
> +	 * (issuing a PASID stop request) in some PCI devices.
> +	 *
> +	 * When the mm_exit() callback returns from the device
> driver, no page
> +	 * request is generated for this PASID anymore and
> outstanding ones have
> +	 * been pushed to the IOMMU (as per PCIe 4.0r1.0 - 6.20.1
> and 10.4.1.2 -
> +	 * Managing PASID TLP Prefix Usage). Some PCI devices will
> wait for all
> +	 * outstanding page requests to come back with a response
> before
> +	 * completing the PASID stop request. Others do not wait for
> page
> +	 * responses, and instead issue this Stop Marker that tells
> us when the
> +	 * PASID can be reallocated.
> +	 *
> +	 * We ignore the Stop Marker because:
> +	 * a. Page requests, which are posted requests, have been
> flushed to the
> +	 *    IOMMU when mm_exit() returns,
> +	 * b. We flush all fault queues after mm_exit() returns and
> before
> +	 *    freeing the PASID.
> +	 *
> +	 * So even though the Stop Marker might be issued by the
> device *after*
> +	 * the stop request completes, outstanding faults will have
> been dealt
> +	 * with by the time we free the PASID.
> +	 */
> +	if (evt->last_req &&
> +	    !(evt->prot & (IOMMU_FAULT_READ | IOMMU_FAULT_WRITE)))
> +		return IOMMU_PAGE_RESP_HANDLED;
> +
If we don't expect a page response, shouldn't it be filtered by the
IOMMU vendor driver in the first place? i.e. in the vendor IOMMU driver
PRQ handler, it will sanitize the request anyway, for anything that
does not need response, it will not call iommu_report_device_fault().
> +	mm = iommu_sva_find(evt->pasid);
> +	if (!mm)
> +		return ret;
> +
> +	down_read(&mm->mmap_sem);
> +
> +	vma = find_extend_vma(mm, evt->addr);
> +	if (!vma)
> +		/* Unmapped area */
> +		goto out_put_mm;
> +
> +	if (evt->prot & IOMMU_FAULT_READ)
> +		access_flags |= VM_READ;
> +
> +	if (evt->prot & IOMMU_FAULT_WRITE) {
> +		access_flags |= VM_WRITE;
> +		fault_flags |= FAULT_FLAG_WRITE;
> +	}
> +
> +	if (evt->prot & IOMMU_FAULT_EXEC) {
> +		access_flags |= VM_EXEC;
> +		fault_flags |= FAULT_FLAG_INSTRUCTION;
> +	}
> +
> +	if (!(evt->prot & IOMMU_FAULT_PRIV))
> +		fault_flags |= FAULT_FLAG_USER;
> +
> +	if (access_flags & ~vma->vm_flags)
> +		/* Access fault */
> +		goto out_put_mm;
> +
> +	ret = handle_mm_fault(vma, evt->addr, fault_flags);
> +	ret = ret & VM_FAULT_ERROR ? IOMMU_PAGE_RESP_INVALID :
> +		IOMMU_PAGE_RESP_SUCCESS;
> +
> +out_put_mm:
> +	up_read(&mm->mmap_sem);
> +
> +	/*
> +	 * If the process exits while we're handling the fault on
> its mm, we
> +	 * can't do mmput(). exit_mmap() would release the MMU
> notifier, calling
> +	 * iommu_notifier_release(), which has to flush the fault
> queue that
> +	 * we're executing on... So mmput_async() moves the release
> of the mm to
> +	 * another thread, if we're the last user.
> +	 */
> +	mmput_async(mm);
> +
> +	return ret;
>  }
>  
>  static void iommu_fault_handle_group(struct work_struct *work)

[Jacob Pan]
--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Joerg Roedel Feb. 15, 2018, 9:59 a.m. UTC | #13
On Mon, Feb 12, 2018 at 06:33:16PM +0000, Jean-Philippe Brucker wrote:
  
> +config IOMMU_SVA
> +	bool "Shared Virtual Addressing API for the IOMMU"
> +	select IOMMU_API
> +	help
> +	  Enable process address space management for the IOMMU API. In systems
> +	  that support it, device drivers can bind process address spaces to
> +	  devices and share their page tables using this API.
> +
> +	  If unsure, say N here.

I think this should be an option selected by IOMMU driver and not be
activly selectable by the user.

> +/**
> + * iommu_sva_device_shutdown() - Shutdown Shared Virtual Addressing for a device
> + * @dev: the device
> + *
> + * Disable SVA. The device should not be performing any DMA while this function
> + * is running.

Is this a good idea? How about devices that get hot-unplugged while
processes still use them and there is DMA going back and forth? This
function can be the point to shut down all ongoing stuff first and the
shutdown the device.

--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Jean-Philippe Brucker Feb. 15, 2018, 12:40 p.m. UTC | #14
On 13/02/18 23:34, Tian, Kevin wrote:
>> From: Jean-Philippe Brucker
>> Sent: Tuesday, February 13, 2018 8:57 PM
>>
>> On 13/02/18 07:54, Tian, Kevin wrote:
>>>> From: Jean-Philippe Brucker
>>>> Sent: Tuesday, February 13, 2018 2:33 AM
>>>>
>>>> Add bind() and unbind() operations to the IOMMU API. Device drivers
>> can
>>>> use them to share process page tables with their devices. bind_group()
>>>> is provided for VFIO's convenience, as it needs to provide a coherent
>>>> interface on containers. Other device drivers will most likely want to
>>>> use bind_device(), which binds a single device in the group.
>>>
>>> I saw your bind_group implementation tries to bind the address space
>>> for all devices within a group, which IMO has some problem. Based on
>> PCIe
>>> spec, packet routing on the bus doesn't take PASID into consideration.
>>> since devices within same group cannot be isolated based on requestor-
>> ID
>>> i.e. traffic not guaranteed going to IOMMU, enabling SVA on multiple
>> devices
>>> could cause undesired p2p.
>> But so does enabling "classic" DMA... If two devices are not protected by
>> ACS for example, they are put in the same IOMMU group, and one device
>> might be able to snoop the other's DMA. VFIO allows userspace to create a
>> container for them and use MAP/UNMAP, but makes it explicit to the user
>> that for DMA, these devices are not isolated and must be considered as a
>> single device (you can't pass them to different VMs or put them in
>> different containers). So I tried to keep the same idea as MAP/UNMAP for
>> SVA, performing BIND/UNBIND operations on the VFIO container instead of
>> the device.
> 
> there is a small difference. for classic DMA we can reserve PCI BARs 
> when allocating IOVA, thus multiple devices in the same group can 
> still work correctly applied with same translation, if isolation is not
> cared in between. However for SVA it's CPU virtual addresses 
> managed by kernel mm thus difficult to introduce similar address 
> reservation. Then it's possible for a VA falling into other device's 
> BAR in the same group and cause undesired p2p traffic. In such 
> regard, SVA is actually functionally-broken.

I think the problem exists even if there is a single device in the group.
If for example, malloc() returns a VA that corresponds to a PCI host
bridge in IOVA space, performing DMA on that buffer won't reach the IOMMU
and will cause undesirable side-effects.

My series doesn't address the problem, but I believe we should carve
reserved regions out of the process address space during bind(), for
example by creating a PROT_NONE vma preventing userspace from obtaining
that VA.

If you solve this problem, you also solve it for multiple devices in a
group, because the IOMMU core provides the resv API on groups... That's
until you hotplug a device into a live group (currently WARN in VFIO),
with different resv regions.

>> I kept the analogy simple though, because I don't think there will be many
>> SVA-capable systems that require IOMMU groups. They will likely
> 
> I agree that multiple SVA-capable devices in same IOMMU group is not
> a typical configuration, especially it's usually observed on new devices.
> Then based on above limitation, I think we could just explicitly avoid
> enabling SVA in such case. :-)

I'd certainly like that :)

Thanks,
Jean
--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Jean-Philippe Brucker Feb. 15, 2018, 12:42 p.m. UTC | #15
On 13/02/18 23:43, Tian, Kevin wrote:
>> From: Jean-Philippe Brucker
>> Sent: Tuesday, February 13, 2018 8:40 PM
>>
>>
>> [...]
>>>> +
>>>> +/**
>>>> + * iommu_sva_device_init() - Initialize Shared Virtual Addressing for a
>>>> device
>>>> + * @dev: the device
>>>> + * @features: bitmask of features that need to be initialized
>>>> + * @max_pasid: max PASID value supported by the device
>>>> + *
>>>> + * Users of the bind()/unbind() API must call this function to initialize all
>>>> + * features required for SVA.
>>>> + *
>>>> + * - If the device should support multiple address spaces (e.g. PCI
>> PASID),
>>>> + *   IOMMU_SVA_FEAT_PASID must be requested.
>>>
>>> I think it is by default assumed when using this API, based on definition of
>>> SVA. Can you elaborate the situation where this flag can be cleared?
>>
>> When passing a device to userspace, you could also share its non-pasid
>> address space with the process. It requires a new domain type so is left
>> as a TODO in patch 2/37. I did get requests for this feature, though I
>> think it was mostly for prototyping. I guess I could remove the flag, and
>> reintroduce it as IOMMU_SVA_FEAT_NO_PASID later on.
> 
> sorry I still didn't get the definition of non-pasid address space. 
> Did you mean the GPA/IOVA address space and no_pasid implies
> actually some default PASID associated?

Yes I mean merging the process address space and IOVA space. There are no
PASIDs involved if the device or the IOMMU doesn't support it. Instead of
private DMA page tables you program the mm pgd into the IOMMU. A VFIO
userspace driver, instead of sending MAP/UNMAP ioctl, could simply issue a
BIND.

Technically nothing prevents it, but now the resv problem discussed on
patch 2/37 stands out. For example on x86 you'd probably need to carve the
IOAPIC MSI range out of the process address space. On Arm you'd need to
create a write-only mapping for MSIs (IOMMU translates it to the IRQ chip
address, but thankfully accessing the doorbell from CPU side doesn't
trigger an MSI.)

>> [...]
>>>> +	ret = domain->ops->sva_device_init(dev, features, &min_pasid,
>>>> +					   &max_pasid);
>>>> +	if (ret)
>>>> +		return ret;
>>>> +
>>>> +	/* FIXME: racy. Next version should have a mutex (same as fault
>>>> handler) */
>>>> +	dev_param->sva_features = features;
>>>> +	dev_param->min_pasid = min_pasid;
>>>> +	dev_param->max_pasid = max_pasid;
>>>
>>> what's the point of min_pasid here?
>>
>> Arm SMMUv3 uses entry 0 of the PASID table for the default (non-pasid)
>> context, so it needs to set min_pasid to 1. AMD IOMMU recently added a
>> similar feature (GIoSup), if I understood correctly.
>>
> 
> just for such purpose maybe we should just define a reserved_pasid
> otherwise there will be some waste if an implementation allows it
> non-zero.

What's wasted? It's slightly simpler to use min_pasid because we just pass
that limit to idr_alloc(). With a reserved_pasid we'll have to call
idr_alloc(reserved_pasid) once, for the same result.

Thanks,
Jean
--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Alex Williamson Feb. 16, 2018, 7:33 p.m. UTC | #16
On Mon, 12 Feb 2018 18:33:52 +0000
Jean-Philippe Brucker <jean-philippe.brucker@arm.com> wrote:

> Add two new ioctl for VFIO containers. VFIO_IOMMU_BIND_PROCESS creates a
> bond between a container and a process address space, identified by a
> device-specific ID named PASID. This allows the device to target DMA
> transactions at the process virtual addresses without a need for mapping
> and unmapping buffers explicitly in the IOMMU. The process page tables are
> shared with the IOMMU, and mechanisms such as PCI ATS/PRI are used to
> handle faults. VFIO_IOMMU_UNBIND_PROCESS removes a bond created with
> VFIO_IOMMU_BIND_PROCESS.
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> ---
>  drivers/vfio/vfio_iommu_type1.c | 399 ++++++++++++++++++++++++++++++++++++++++
>  include/uapi/linux/vfio.h       |  76 ++++++++
>  2 files changed, 475 insertions(+)
> 
> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> index e30e29ae4819..cac066f0026b 100644
> --- a/drivers/vfio/vfio_iommu_type1.c
> +++ b/drivers/vfio/vfio_iommu_type1.c
> @@ -30,6 +30,7 @@
>  #include <linux/iommu.h>
>  #include <linux/module.h>
>  #include <linux/mm.h>
> +#include <linux/ptrace.h>
>  #include <linux/rbtree.h>
>  #include <linux/sched/signal.h>
>  #include <linux/sched/mm.h>
> @@ -60,6 +61,7 @@ MODULE_PARM_DESC(disable_hugepages,
>  
>  struct vfio_iommu {
>  	struct list_head	domain_list;
> +	struct list_head	mm_list;
>  	struct vfio_domain	*external_domain; /* domain for external user */
>  	struct mutex		lock;
>  	struct rb_root		dma_list;
> @@ -90,6 +92,15 @@ struct vfio_dma {
>  struct vfio_group {
>  	struct iommu_group	*iommu_group;
>  	struct list_head	next;
> +	bool			sva_enabled;
> +};
> +
> +struct vfio_mm {
> +#define VFIO_PASID_INVALID	(-1)
> +	spinlock_t		lock;
> +	int			pasid;
> +	struct mm_struct	*mm;
> +	struct list_head	next;
>  };
>  
>  /*
> @@ -1117,6 +1128,157 @@ static int vfio_iommu_replay(struct vfio_iommu *iommu,
>  	return 0;
>  }
>  
> +static int vfio_iommu_mm_exit(struct device *dev, int pasid, void *data)
> +{
> +	struct vfio_mm *vfio_mm = data;
> +
> +	/*
> +	 * The mm_exit callback cannot block, so we can't take the iommu mutex
> +	 * and remove this vfio_mm from the list. Hopefully the SVA code will
> +	 * relax its locking requirement in the future.
> +	 *
> +	 * We mostly care about attach_group, which will attempt to replay all
> +	 * binds in this container. Ensure that it doesn't touch this defunct mm
> +	 * struct, by clearing the pointer. The structure will be freed when the
> +	 * group is removed from the container.
> +	 */
> +	spin_lock(&vfio_mm->lock);
> +	vfio_mm->mm = NULL;
> +	spin_unlock(&vfio_mm->lock);
> +
> +	return 0;
> +}
> +
> +static int vfio_iommu_sva_init(struct device *dev, void *data)
> +{
> +
> +	int ret;
> +
> +	ret = iommu_sva_device_init(dev, IOMMU_SVA_FEAT_PASID |
> +				    IOMMU_SVA_FEAT_IOPF, 0);
> +	if (ret)
> +		return ret;
> +
> +	return iommu_register_mm_exit_handler(dev, vfio_iommu_mm_exit);
> +}
> +
> +static int vfio_iommu_sva_shutdown(struct device *dev, void *data)
> +{
> +	iommu_sva_device_shutdown(dev);
> +	iommu_unregister_mm_exit_handler(dev);

Typically the order would be reverse of the setup, is it correct this
way?

> +
> +	return 0;
> +}
> +
> +static int vfio_iommu_bind_group(struct vfio_iommu *iommu,
> +				 struct vfio_group *group,
> +				 struct vfio_mm *vfio_mm)
> +{
> +	int ret;
> +	int pasid;
> +
> +	if (!group->sva_enabled) {
> +		ret = iommu_group_for_each_dev(group->iommu_group, NULL,
> +					       vfio_iommu_sva_init);
> +		if (ret)
> +			return ret;

Seems were at an unknown state here, do we need to undo any that
succeeded?

> +
> +		group->sva_enabled = true;
> +	}
> +
> +	ret = iommu_sva_bind_group(group->iommu_group, vfio_mm->mm, &pasid,
> +				   IOMMU_SVA_FEAT_PASID | IOMMU_SVA_FEAT_IOPF,
> +				   vfio_mm);
> +	if (ret)
> +		return ret;
> +
> +	if (WARN_ON(vfio_mm->pasid != VFIO_PASID_INVALID && pasid !=
> +		    vfio_mm->pasid))
> +		return -EFAULT;
> +
> +	vfio_mm->pasid = pasid;
> +
> +	return 0;
> +}
> +
> +static void vfio_iommu_unbind_group(struct vfio_group *group,
> +				    struct vfio_mm *vfio_mm)
> +{
> +	iommu_sva_unbind_group(group->iommu_group, vfio_mm->pasid);
> +}
> +
> +static void vfio_iommu_unbind(struct vfio_iommu *iommu,
> +			      struct vfio_mm *vfio_mm)
> +{
> +	struct vfio_group *group;
> +	struct vfio_domain *domain;
> +
> +	list_for_each_entry(domain, &iommu->domain_list, next)
> +		list_for_each_entry(group, &domain->group_list, next)
> +			vfio_iommu_unbind_group(group, vfio_mm);
> +}
> +
> +static bool vfio_mm_get(struct vfio_mm *vfio_mm)
> +{
> +	bool ret;
> +
> +	spin_lock(&vfio_mm->lock);
> +	ret = vfio_mm->mm && mmget_not_zero(vfio_mm->mm);
> +	spin_unlock(&vfio_mm->lock);
> +
> +	return ret;
> +}
> +
> +static void vfio_mm_put(struct vfio_mm *vfio_mm)
> +{
> +	mmput(vfio_mm->mm);
> +}
> +
> +static int vfio_iommu_replay_bind(struct vfio_iommu *iommu, struct vfio_group *group)
> +{
> +	int ret = 0;
> +	struct vfio_mm *vfio_mm;
> +
> +	list_for_each_entry(vfio_mm, &iommu->mm_list, next) {
> +		/*
> +		 * Ensure mm doesn't exit while we're binding it to the new
> +		 * group.
> +		 */
> +		if (!vfio_mm_get(vfio_mm))
> +			continue;
> +		ret = vfio_iommu_bind_group(iommu, group, vfio_mm);
> +		vfio_mm_put(vfio_mm);
> +
> +		if (ret)
> +			goto out_unbind;
> +	}
> +
> +	return 0;
> +
> +out_unbind:
> +	list_for_each_entry_continue_reverse(vfio_mm, &iommu->mm_list, next) {
> +		if (!vfio_mm_get(vfio_mm))
> +			continue;
> +		iommu_sva_unbind_group(group->iommu_group, vfio_mm->pasid);
> +		vfio_mm_put(vfio_mm);
> +	}
> +
> +	return ret;
> +}
> +
> +static void vfio_iommu_free_all_mm(struct vfio_iommu *iommu)
> +{
> +	struct vfio_mm *vfio_mm, *tmp;
> +
> +	/*
> +	 * No need for unbind() here. Since all groups are detached from this
> +	 * iommu, bonds have been removed.
> +	 */
> +	list_for_each_entry_safe(vfio_mm, tmp, &iommu->mm_list, next)
> +		kfree(vfio_mm);
> +	INIT_LIST_HEAD(&iommu->mm_list);
> +}
> +
>  /*
>   * We change our unmap behavior slightly depending on whether the IOMMU
>   * supports fine-grained superpages.  IOMMUs like AMD-Vi will use a superpage
> @@ -1301,6 +1463,15 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
>  		    d->prot == domain->prot) {
>  			iommu_detach_group(domain->domain, iommu_group);
>  			if (!iommu_attach_group(d->domain, iommu_group)) {
> +				if (vfio_iommu_replay_bind(iommu, group)) {
> +					iommu_detach_group(d->domain, iommu_group);
> +					ret = iommu_attach_group(domain->domain,
> +								 iommu_group);
> +					if (ret)
> +						goto out_domain;
> +					continue;
> +				}
> +
>  				list_add(&group->next, &d->group_list);
>  				iommu_domain_free(domain->domain);
>  				kfree(domain);
> @@ -1321,6 +1492,10 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
>  	if (ret)
>  		goto out_detach;
>  
> +	ret = vfio_iommu_replay_bind(iommu, group);
> +	if (ret)
> +		goto out_detach;
> +
>  	if (resv_msi) {
>  		ret = iommu_get_msi_cookie(domain->domain, resv_msi_base);
>  		if (ret)
> @@ -1426,6 +1601,11 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
>  			continue;
>  
>  		iommu_detach_group(domain->domain, iommu_group);
> +		if (group->sva_enabled) {
> +			iommu_group_for_each_dev(iommu_group, NULL,
> +						 vfio_iommu_sva_shutdown);
> +			group->sva_enabled = false;
> +		}
>  		list_del(&group->next);
>  		kfree(group);
>  		/*
> @@ -1441,6 +1621,7 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
>  					vfio_iommu_unmap_unpin_all(iommu);
>  				else
>  					vfio_iommu_unmap_unpin_reaccount(iommu);
> +				vfio_iommu_free_all_mm(iommu);
>  			}
>  			iommu_domain_free(domain->domain);
>  			list_del(&domain->next);
> @@ -1475,6 +1656,7 @@ static void *vfio_iommu_type1_open(unsigned long arg)
>  	}
>  
>  	INIT_LIST_HEAD(&iommu->domain_list);
> +	INIT_LIST_HEAD(&iommu->mm_list);
>  	iommu->dma_list = RB_ROOT;
>  	mutex_init(&iommu->lock);
>  	BLOCKING_INIT_NOTIFIER_HEAD(&iommu->notifier);
> @@ -1509,6 +1691,7 @@ static void vfio_iommu_type1_release(void *iommu_data)
>  		kfree(iommu->external_domain);
>  	}
>  
> +	vfio_iommu_free_all_mm(iommu);
>  	vfio_iommu_unmap_unpin_all(iommu);
>  
>  	list_for_each_entry_safe(domain, domain_tmp,
> @@ -1537,6 +1720,184 @@ static int vfio_domains_have_iommu_cache(struct vfio_iommu *iommu)
>  	return ret;
>  }
>  
> +static struct mm_struct *vfio_iommu_get_mm_by_vpid(pid_t vpid)
> +{
> +	struct mm_struct *mm;
> +	struct task_struct *task;
> +
> +	rcu_read_lock();
> +	task = find_task_by_vpid(vpid);
> +	if (task)
> +		get_task_struct(task);
> +	rcu_read_unlock();
> +	if (!task)
> +		return ERR_PTR(-ESRCH);
> +
> +	/* Ensure that current has RW access on the mm */
> +	mm = mm_access(task, PTRACE_MODE_ATTACH_REALCREDS);
> +	put_task_struct(task);
> +
> +	if (!mm)
> +		return ERR_PTR(-ESRCH);
> +
> +	return mm;
> +}
> +
> +static long vfio_iommu_type1_bind_process(struct vfio_iommu *iommu,
> +					  void __user *arg,
> +					  struct vfio_iommu_type1_bind *bind)
> +{
> +	struct vfio_iommu_type1_bind_process params;
> +	struct vfio_domain *domain;
> +	struct vfio_group *group;
> +	struct vfio_mm *vfio_mm;
> +	struct mm_struct *mm;
> +	unsigned long minsz;
> +	int ret = 0;
> +
> +	minsz = sizeof(*bind) + sizeof(params);
> +	if (bind->argsz < minsz)
> +		return -EINVAL;
> +
> +	arg += sizeof(*bind);
> +	if (copy_from_user(&params, arg, sizeof(params)))
> +		return -EFAULT;
> +
> +	if (params.flags & ~VFIO_IOMMU_BIND_PID)
> +		return -EINVAL;
> +
> +	if (params.flags & VFIO_IOMMU_BIND_PID) {
> +		mm = vfio_iommu_get_mm_by_vpid(params.pid);
> +		if (IS_ERR(mm))
> +			return PTR_ERR(mm);
> +	} else {
> +		mm = get_task_mm(current);
> +		if (!mm)
> +			return -EINVAL;
> +	}
> +
> +	mutex_lock(&iommu->lock);
> +	if (!IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu)) {
> +		ret = -EINVAL;
> +		goto out_put_mm;
> +	}
> +
> +	list_for_each_entry(vfio_mm, &iommu->mm_list, next) {
> +		if (vfio_mm->mm != mm)
> +			continue;
> +
> +		params.pasid = vfio_mm->pasid;
> +
> +		ret = copy_to_user(arg, &params, sizeof(params)) ? -EFAULT : 0;
> +		goto out_put_mm;
> +	}
> +
> +	vfio_mm = kzalloc(sizeof(*vfio_mm), GFP_KERNEL);
> +	if (!vfio_mm) {
> +		ret = -ENOMEM;
> +		goto out_put_mm;
> +	}
> +
> +	vfio_mm->mm = mm;
> +	vfio_mm->pasid = VFIO_PASID_INVALID;
> +	spin_lock_init(&vfio_mm->lock);
> +
> +	list_for_each_entry(domain, &iommu->domain_list, next) {
> +		list_for_each_entry(group, &domain->group_list, next) {
> +			ret = vfio_iommu_bind_group(iommu, group, vfio_mm);
> +			if (ret)
> +				break;
> +		}
> +		if (ret)
> +			break;
> +	}
> +
> +	if (ret) {
> +		/* Undo all binds that already succeeded */
> +		list_for_each_entry_continue_reverse(group, &domain->group_list,
> +						     next)
> +			vfio_iommu_unbind_group(group, vfio_mm);
> +		list_for_each_entry_continue_reverse(domain, &iommu->domain_list,
> +						     next)
> +			list_for_each_entry(group, &domain->group_list, next)
> +				vfio_iommu_unbind_group(group, vfio_mm);
> +		kfree(vfio_mm);
> +	} else {
> +		list_add(&vfio_mm->next, &iommu->mm_list);
> +
> +		params.pasid = vfio_mm->pasid;
> +		ret = copy_to_user(arg, &params, sizeof(params)) ? -EFAULT : 0;
> +		if (ret) {
> +			vfio_iommu_unbind(iommu, vfio_mm);
> +			kfree(vfio_mm);
> +		}
> +	}
> +
> +out_put_mm:
> +	mutex_unlock(&iommu->lock);
> +	mmput(mm);
> +
> +	return ret;
> +}
> +
> +static long vfio_iommu_type1_unbind_process(struct vfio_iommu *iommu,
> +					    void __user *arg,
> +					    struct vfio_iommu_type1_bind *bind)
> +{
> +	int ret = -EINVAL;
> +	unsigned long minsz;
> +	struct mm_struct *mm;
> +	struct vfio_mm *vfio_mm;
> +	struct vfio_iommu_type1_bind_process params;
> +
> +	minsz = sizeof(*bind) + sizeof(params);
> +	if (bind->argsz < minsz)
> +		return -EINVAL;
> +
> +	arg += sizeof(*bind);
> +	if (copy_from_user(&params, arg, sizeof(params)))
> +		return -EFAULT;
> +
> +	if (params.flags & ~VFIO_IOMMU_BIND_PID)
> +		return -EINVAL;
> +
> +	/*
> +	 * We can't simply unbind a foreign process by PASID, because the
> +	 * process might have died and the PASID might have been reallocated to
> +	 * another process. Instead we need to fetch that process mm by PID
> +	 * again to make sure we remove the right vfio_mm. In addition, holding
> +	 * the mm guarantees that mm_users isn't dropped while we unbind and the
> +	 * exit_mm handler doesn't fire. While not strictly necessary, not
> +	 * having to care about that race simplifies everyone's life.
> +	 */
> +	if (params.flags & VFIO_IOMMU_BIND_PID) {
> +		mm = vfio_iommu_get_mm_by_vpid(params.pid);
> +		if (IS_ERR(mm))
> +			return PTR_ERR(mm);

I don't understand how this works for a process that has exited, the
mm_exit function gets called to clear vfio_mm.mm, the above may or may
not work (could be new ptrace'able process with same pid), but it won't
match the mm below, so is the vfio_mm that mm_exit zapped forever stuck
in this list until the container is destroyed?

> +	} else {
> +		mm = get_task_mm(current);
> +		if (!mm)
> +			return -EINVAL;
> +	}
> +
> +	ret = -ESRCH;
> +	mutex_lock(&iommu->lock);
> +	list_for_each_entry(vfio_mm, &iommu->mm_list, next) {
> +		if (vfio_mm->mm != mm)
> +			continue;
> +
> +		vfio_iommu_unbind(iommu, vfio_mm);
> +		list_del(&vfio_mm->next);
> +		kfree(vfio_mm);
> +		ret = 0;
> +		break;
> +	}
> +	mutex_unlock(&iommu->lock);
> +	mmput(mm);
> +
> +	return ret;
> +}
> +
>  static long vfio_iommu_type1_ioctl(void *iommu_data,
>  				   unsigned int cmd, unsigned long arg)
>  {
> @@ -1607,6 +1968,44 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
>  
>  		return copy_to_user((void __user *)arg, &unmap, minsz) ?
>  			-EFAULT : 0;
> +
> +	} else if (cmd == VFIO_IOMMU_BIND) {
> +		struct vfio_iommu_type1_bind bind;
> +
> +		minsz = offsetofend(struct vfio_iommu_type1_bind, mode);
> +
> +		if (copy_from_user(&bind, (void __user *)arg, minsz))
> +			return -EFAULT;
> +
> +		if (bind.argsz < minsz)
> +			return -EINVAL;
> +
> +		switch (bind.mode) {
> +		case VFIO_IOMMU_BIND_PROCESS:
> +			return vfio_iommu_type1_bind_process(iommu, (void *)arg,
> +							     &bind);
> +		default:
> +			return -EINVAL;
> +		}
> +
> +	} else if (cmd == VFIO_IOMMU_UNBIND) {
> +		struct vfio_iommu_type1_bind bind;
> +
> +		minsz = offsetofend(struct vfio_iommu_type1_bind, mode);
> +
> +		if (copy_from_user(&bind, (void __user *)arg, minsz))
> +			return -EFAULT;
> +
> +		if (bind.argsz < minsz)
> +			return -EINVAL;
> +
> +		switch (bind.mode) {
> +		case VFIO_IOMMU_BIND_PROCESS:
> +			return vfio_iommu_type1_unbind_process(iommu, (void *)arg,
> +							       &bind);
> +		default:
> +			return -EINVAL;
> +		}
>  	}
>  
>  	return -ENOTTY;
> diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
> index c74372163ed2..e1b9b8c58916 100644
> --- a/include/uapi/linux/vfio.h
> +++ b/include/uapi/linux/vfio.h
> @@ -638,6 +638,82 @@ struct vfio_iommu_type1_dma_unmap {
>  #define VFIO_IOMMU_ENABLE	_IO(VFIO_TYPE, VFIO_BASE + 15)
>  #define VFIO_IOMMU_DISABLE	_IO(VFIO_TYPE, VFIO_BASE + 16)
>  
> +/*
> + * VFIO_IOMMU_BIND_PROCESS
> + *
> + * Allocate a PASID for a process address space, and use it to attach this
> + * process to all devices in the container. Devices can then tag their DMA
> + * traffic with the returned @pasid to perform transactions on the associated
> + * virtual address space. Mapping and unmapping buffers is performed by standard
> + * functions such as mmap and malloc.
> + *
> + * If flag is VFIO_IOMMU_BIND_PID, @pid contains the pid of a foreign process to
> + * bind. Otherwise the current task is bound. Given that the caller owns the
> + * device, setting this flag grants the caller read and write permissions on the
> + * entire address space of foreign process described by @pid. Therefore,
> + * permission to perform the bind operation on a foreign process is governed by
> + * the ptrace access mode PTRACE_MODE_ATTACH_REALCREDS check. See man ptrace(2)
> + * for more information.
> + *
> + * On success, VFIO writes a Process Address Space ID (PASID) into @pasid. This
> + * ID is unique to a process and can be used on all devices in the container.
> + *
> + * On fork, the child inherits the device fd and can use the bonds setup by its
> + * parent. Consequently, the child has R/W access on the address spaces bound by
> + * its parent. After an execv, the device fd is closed and the child doesn't
> + * have access to the address space anymore.
> + *
> + * To remove a bond between process and container, VFIO_IOMMU_UNBIND ioctl is
> + * issued with the same parameters. If a pid was specified in VFIO_IOMMU_BIND,
> + * it should also be present for VFIO_IOMMU_UNBIND. Otherwise unbind the current
> + * task from the container.
> + */
> +struct vfio_iommu_type1_bind_process {
> +	__u32	flags;
> +#define VFIO_IOMMU_BIND_PID		(1 << 0)
> +	__u32	pasid;
> +	__s32	pid;
> +};
> +
> +/*
> + * Only mode supported at the moment is VFIO_IOMMU_BIND_PROCESS, which takes
> + * vfio_iommu_type1_bind_process in data.
> + */
> +struct vfio_iommu_type1_bind {
> +	__u32	argsz;
> +	__u32	mode;

s/mode/flags/

> +#define VFIO_IOMMU_BIND_PROCESS		(1 << 0)
> +	__u8	data[];
> +};

I'm not convinced having a separate vfio_iommu_type1_bind_process
struct is necessary.  It seems like we always expect to return a pasid,
only the pid is optional, but that could be handled by a single
structure with a flag bit to indicate a pid bind is requested.

> +
> +/*
> + * VFIO_IOMMU_BIND - _IOWR(VFIO_TYPE, VFIO_BASE + 22, struct vfio_iommu_bind)

vfio_iommu_type1_bind

> + *
> + * Manage address spaces of devices in this container. Initially a TYPE1
> + * container can only have one address space, managed with
> + * VFIO_IOMMU_MAP/UNMAP_DMA.
> + *
> + * An IOMMU of type VFIO_TYPE1_NESTING_IOMMU can be managed by both MAP/UNMAP
> + * and BIND ioctls at the same time. MAP/UNMAP acts on the stage-2 (host) page
> + * tables, and BIND manages the stage-1 (guest) page tables. Other types of
> + * IOMMU may allow MAP/UNMAP and BIND to coexist, where MAP/UNMAP controls
> + * non-PASID traffic and BIND controls PASID traffic. But this depends on the
> + * underlying IOMMU architecture and isn't guaranteed.
> + *
> + * Availability of this feature depends on the device, its bus, the underlying
> + * IOMMU and the CPU architecture.
> + *
> + * returns: 0 on success, -errno on failure.
> + */
> +#define VFIO_IOMMU_BIND		_IO(VFIO_TYPE, VFIO_BASE + 22)
> +
> +/*
> + * VFIO_IOMMU_UNBIND - _IOWR(VFIO_TYPE, VFIO_BASE + 23, struct vfio_iommu_bind)

vifo_iommu_type1_bind

> + *
> + * Undo what was done by the corresponding VFIO_IOMMU_BIND ioctl.
> + */
> +#define VFIO_IOMMU_UNBIND	_IO(VFIO_TYPE, VFIO_BASE + 23)
> +
>  /* -------- Additional API for SPAPR TCE (Server POWERPC) IOMMU -------- */
>  
>  /*

--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Jacob Pan Feb. 20, 2018, 11:19 p.m. UTC | #17
On Mon, 12 Feb 2018 18:33:24 +0000
Jean-Philippe Brucker <jean-philippe.brucker@arm.com> wrote:

>  
> +/**
> + * enum page_response_code - Return status of fault handlers,
> telling the IOMMU
> + * driver how to proceed with the fault.
> + *
> + * @IOMMU_FAULT_STATUS_HANDLED: Stop processing the fault, and do
> not send a
> + *	reply to the device.
> + * @IOMMU_FAULT_STATUS_CONTINUE: Fault was not handled. Call the
> next handler,
> + *	or terminate.
> + * @IOMMU_FAULT_STATUS_SUCCESS: Fault has been handled and the page
> tables
> + *	populated, retry the access. This is "Success" in PCI PRI.
> + * @IOMMU_FAULT_STATUS_FAILURE: General error. Drop all subsequent
> faults from
> + *	this device if possible. This is "Response Failure" in PCI
> PRI.
> + * @IOMMU_FAULT_STATUS_INVALID: Could not handle this fault, don't
> retry the
> + *	access. This is "Invalid Request" in PCI PRI.
> + */
> +enum page_response_code {
> +	IOMMU_PAGE_RESP_HANDLED = 0,
> +	IOMMU_PAGE_RESP_CONTINUE,
> +	IOMMU_PAGE_RESP_SUCCESS,
> +	IOMMU_PAGE_RESP_INVALID,
> +	IOMMU_PAGE_RESP_FAILURE,
> +};
it seems to me two things are mixed here:
1. driver handler response status (HANDLED, CONTINUE)
2. PCI standard page response code (the rest)
Can we leave them separate? then we don't have to convert this enum
to/from PCI ATS page response code.

> +
>  /**
>   * Generic page response information based on PCI ATS and PASID spec.
>   * @addr: servicing page address
> @@ -202,12 +225,7 @@ enum page_response_type {
>  struct page_response_msg {
>  	u64 addr;
>  	u32 pasid;
> -	u32 resp_code:4;
> -#define IOMMU_PAGE_RESP_SUCCESS	0
> -#define IOMMU_PAGE_RESP_INVALID	1
> -#define IOMMU_PAGE_RESP_HANDLED	2
> -#define IOMMU_PAGE_RESP_FAILURE	0xF
> -
[Jacob Pan]
--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Tian, Kevin Feb. 27, 2018, 6:21 a.m. UTC | #18
> From: Jean-Philippe Brucker [mailto:jean-philippe.brucker@arm.com]
> Sent: Thursday, February 15, 2018 8:42 PM
> 
> On 13/02/18 23:43, Tian, Kevin wrote:
> >> From: Jean-Philippe Brucker
> >> Sent: Tuesday, February 13, 2018 8:40 PM
> >>
> >>
> >> [...]
> >>>> +
> >>>> +/**
> >>>> + * iommu_sva_device_init() - Initialize Shared Virtual Addressing for
> a
> >>>> device
> >>>> + * @dev: the device
> >>>> + * @features: bitmask of features that need to be initialized
> >>>> + * @max_pasid: max PASID value supported by the device
> >>>> + *
> >>>> + * Users of the bind()/unbind() API must call this function to initialize
> all
> >>>> + * features required for SVA.
> >>>> + *
> >>>> + * - If the device should support multiple address spaces (e.g. PCI
> >> PASID),
> >>>> + *   IOMMU_SVA_FEAT_PASID must be requested.
> >>>
> >>> I think it is by default assumed when using this API, based on definition
> of
> >>> SVA. Can you elaborate the situation where this flag can be cleared?
> >>
> >> When passing a device to userspace, you could also share its non-pasid
> >> address space with the process. It requires a new domain type so is left
> >> as a TODO in patch 2/37. I did get requests for this feature, though I
> >> think it was mostly for prototyping. I guess I could remove the flag, and
> >> reintroduce it as IOMMU_SVA_FEAT_NO_PASID later on.
> >
> > sorry I still didn't get the definition of non-pasid address space.
> > Did you mean the GPA/IOVA address space and no_pasid implies
> > actually some default PASID associated?
> 
> Yes I mean merging the process address space and IOVA space. There are
> no
> PASIDs involved if the device or the IOMMU doesn't support it. Instead of
> private DMA page tables you program the mm pgd into the IOMMU. A VFIO
> userspace driver, instead of sending MAP/UNMAP ioctl, could simply issue
> a
> BIND.

got it. yes it's better to remove it for now which can avoid
unnecessary confusion. :-)

> 
> Technically nothing prevents it, but now the resv problem discussed on
> patch 2/37 stands out. For example on x86 you'd probably need to carve
> the
> IOAPIC MSI range out of the process address space. On Arm you'd need to
> create a write-only mapping for MSIs (IOMMU translates it to the IRQ chip
> address, but thankfully accessing the doorbell from CPU side doesn't
> trigger an MSI.)

so if overlap already exists when binding a process address space
(since binding may happen much later than creating the process),
I assume the call will simply fail since carve out at this point is not
possible?

> 
> >> [...]
> >>>> +	ret = domain->ops->sva_device_init(dev, features, &min_pasid,
> >>>> +					   &max_pasid);
> >>>> +	if (ret)
> >>>> +		return ret;
> >>>> +
> >>>> +	/* FIXME: racy. Next version should have a mutex (same as fault
> >>>> handler) */
> >>>> +	dev_param->sva_features = features;
> >>>> +	dev_param->min_pasid = min_pasid;
> >>>> +	dev_param->max_pasid = max_pasid;
> >>>
> >>> what's the point of min_pasid here?
> >>
> >> Arm SMMUv3 uses entry 0 of the PASID table for the default (non-pasid)
> >> context, so it needs to set min_pasid to 1. AMD IOMMU recently added
> a
> >> similar feature (GIoSup), if I understood correctly.
> >>
> >
> > just for such purpose maybe we should just define a reserved_pasid
> > otherwise there will be some waste if an implementation allows it
> > non-zero.
> 
> What's wasted? It's slightly simpler to use min_pasid because we just pass
> that limit to idr_alloc(). With a reserved_pasid we'll have to call
> idr_alloc(reserved_pasid) once, for the same result.
> 

I'm thinking about the case where an implementation allows
software to define a random reserved_pasid, then banning
all pasids below reserved one could be a waste. But after
more thinking it is not a big problem. We can request such
driver to use 0 as reserved_pasid then same situation as
ARM side.

Thanks
Kevin
--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Jacob Pan Feb. 27, 2018, 6:51 p.m. UTC | #19
On Mon, 12 Feb 2018 18:33:31 +0000
Jean-Philippe Brucker <jean-philippe.brucker@arm.com> wrote:

> Add a small API within the IOMMU subsystem to handle different
> formats of PASID tables. It uses the same principle as io-pgtable:
> 
> * The IOMMU driver registers a PASID table with some invalidation
>   callbacks.
> * The pasid-table lib allocates a set of tables of the right format,
> and returns an iommu_pasid_table_ops structure.
> * The IOMMU driver allocates entries and writes them using the
> provided ops.
> * The pasid-table lib calls the IOMMU driver back for invalidation
> when necessary.
> * The IOMMU driver unregisters the ops which frees the tables when
>   finished.
> 
> An example user will be Arm SMMU in a subsequent patch.
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> ---
>  drivers/iommu/Kconfig       |   8 +++
>  drivers/iommu/Makefile      |   1 +
>  drivers/iommu/iommu-pasid.c |  53 +++++++++++++++++
>  drivers/iommu/iommu-pasid.h | 142
> ++++++++++++++++++++++++++++++++++++++++++++ 4 files changed, 204
> insertions(+) create mode 100644 drivers/iommu/iommu-pasid.c
>  create mode 100644 drivers/iommu/iommu-pasid.h
> 
> diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
> index e751bb9958ba..8add90ba9b75 100644
> --- a/drivers/iommu/Kconfig
> +++ b/drivers/iommu/Kconfig
> @@ -60,6 +60,14 @@ config IOMMU_IO_PGTABLE_ARMV7S_SELFTEST
>  
>  endmenu
>  
> +menu "Generic PASID table support"
> +
> +# Selected by the actual PASID table implementations
> +config IOMMU_PASID_TABLE
> +	bool
> +
> +endmenu
> +
>  config IOMMU_IOVA
>  	tristate
>  
> diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
> index f4324e29035e..338e59c93131 100644
> --- a/drivers/iommu/Makefile
> +++ b/drivers/iommu/Makefile
> @@ -8,6 +8,7 @@ obj-$(CONFIG_IOMMU_FAULT) += io-pgfault.o
>  obj-$(CONFIG_IOMMU_IO_PGTABLE) += io-pgtable.o
>  obj-$(CONFIG_IOMMU_IO_PGTABLE_ARMV7S) += io-pgtable-arm-v7s.o
>  obj-$(CONFIG_IOMMU_IO_PGTABLE_LPAE) += io-pgtable-arm.o
> +obj-$(CONFIG_IOMMU_PASID_TABLE) += iommu-pasid.o
>  obj-$(CONFIG_IOMMU_IOVA) += iova.o
>  obj-$(CONFIG_OF_IOMMU)	+= of_iommu.o
>  obj-$(CONFIG_MSM_IOMMU) += msm_iommu.o
> diff --git a/drivers/iommu/iommu-pasid.c b/drivers/iommu/iommu-pasid.c
> new file mode 100644
> index 000000000000..6b21d369d514
> --- /dev/null
> +++ b/drivers/iommu/iommu-pasid.c
> @@ -0,0 +1,53 @@
> +/*
> + * PASID table management for the IOMMU
> + *
> + * Copyright (C) 2018 ARM Ltd.
> + * Author: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> + *
> + * SPDX-License-Identifier: GPL-2.0
> + */
> +
> +#include <linux/kernel.h>
> +
> +#include "iommu-pasid.h"
> +
> +static const struct iommu_pasid_init_fns *
> +pasid_table_init_fns[PASID_TABLE_NUM_FMTS] = {
> +};
> +
> +struct iommu_pasid_table_ops *
> +iommu_alloc_pasid_ops(enum iommu_pasid_table_fmt fmt,
> +		      struct iommu_pasid_table_cfg *cfg, void
> *cookie) +{
I guess you don't need to pass in cookie here.
> +	struct iommu_pasid_table *table;
> +	const struct iommu_pasid_init_fns *fns;
> +
> +	if (fmt >= PASID_TABLE_NUM_FMTS)
> +		return NULL;
> +
> +	fns = pasid_table_init_fns[fmt];
> +	if (!fns)
> +		return NULL;
> +
> +	table = fns->alloc(cfg, cookie);
> +	if (!table)
> +		return NULL;
> +
> +	table->fmt = fmt;
> +	table->cookie = cookie;
> +	table->cfg = *cfg;
> +
the ops is already IOMMU model specific, why do you need to pass cfg
back?
> +	return &table->ops;
If there is no common code that uses these ops, I don't see the benefit
of having these APIs. Or the plan is to consolidate even further such
that referene to pasid table can be attached at per iommu_domain etc,
but that would be model specific choice.

Jacob 
> +}
> +
> +void iommu_free_pasid_ops(struct iommu_pasid_table_ops *ops)
> +{
> +	struct iommu_pasid_table *table;
> +
> +	if (!ops)
> +		return;
> +
> +	table = container_of(ops, struct iommu_pasid_table, ops);
> +	iommu_pasid_flush_all(table);
> +	pasid_table_init_fns[table->fmt]->free(table);
> +}
> diff --git a/drivers/iommu/iommu-pasid.h b/drivers/iommu/iommu-pasid.h
> new file mode 100644
> index 000000000000..40a27d35c1e0
> --- /dev/null
> +++ b/drivers/iommu/iommu-pasid.h
> @@ -0,0 +1,142 @@
> +/*
> + * PASID table management for the IOMMU
> + *
> + * Copyright (C) 2017 ARM Ltd.
> + * Author: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> + *
> + * SPDX-License-Identifier: GPL-2.0
> + */
> +#ifndef __IOMMU_PASID_H
> +#define __IOMMU_PASID_H
> +
> +#include <linux/types.h>
> +#include "io-pgtable.h"
> +
> +struct mm_struct;
> +
> +enum iommu_pasid_table_fmt {
> +	PASID_TABLE_NUM_FMTS,
> +};
> +
> +/**
> + * iommu_pasid_entry - Entry of a PASID table
> + *
> + * @token:	architecture-specific data needed to uniquely
> identify the
> + *		entry. Most notably used for TLB invalidation
> + */
> +struct iommu_pasid_entry {
> +	u64		tag;
> +};
> +
> +/**
> + * iommu_pasid_table_ops - Operations on a PASID table
> + *
> + * @alloc_shared_entry:	allocate an entry for sharing an mm
> (SVA)
> + *			Returns the pointer to a new entry or an
> error
> + * @alloc_priv_entry:	allocate an entry for map/unmap
> operations
> + *			Returns the pointer to a new entry or an
> error
> + * @free_entry:		free an entry obtained with
> alloc_entry
> + * @set_entry:		write PASID table entry
> + * @clear_entry:	clear PASID table entry
> + */
> +struct iommu_pasid_table_ops {
> +	struct iommu_pasid_entry *
> +	(*alloc_shared_entry)(struct iommu_pasid_table_ops *ops,
> +			      struct mm_struct *mm);
> +	struct iommu_pasid_entry *
> +	(*alloc_priv_entry)(struct iommu_pasid_table_ops *ops,
> +			    enum io_pgtable_fmt fmt,
> +			    struct io_pgtable_cfg *cfg);
> +	void (*free_entry)(struct iommu_pasid_table_ops *ops,
> +			   struct iommu_pasid_entry *entry);
> +	int (*set_entry)(struct iommu_pasid_table_ops *ops, int
> pasid,
> +			 struct iommu_pasid_entry *entry);
> +	void (*clear_entry)(struct iommu_pasid_table_ops *ops, int
> pasid,
> +			    struct iommu_pasid_entry *entry);
> +};
> +
> +/**
> + * iommu_pasid_sync_ops - Callbacks into the IOMMU driver
> + *
> + * @cfg_flush:		flush cached configuration for one
> entry. For a
> + *			multi-level PASID table, 'leaf' tells
> whether to only
> + *			flush cached leaf entries or intermediate
> levels as
> + *			well.
> + * @cfg_flush_all:	flush cached configuration for all entries
> of the PASID
> + *			table
> + * @tlb_flush:		flush TLB entries for one entry
> + */
> +struct iommu_pasid_sync_ops {
> +	void (*cfg_flush)(void *cookie, int pasid, bool leaf);
> +	void (*cfg_flush_all)(void *cookie);
> +	void (*tlb_flush)(void *cookie, int pasid,
> +			  struct iommu_pasid_entry *entry);
> +};
> +
> +/**
> + * struct iommu_pasid_table_cfg - Configuration data for a set of
> PASID tables.
> + *
> + * @iommu_dev	device performing the DMA table walks
> + * @order:	number of PASID bits, set by IOMMU driver
> + * @flush:	TLB management callbacks for this set of tables.
> + *
> + * @base:	DMA address of the allocated table, set by the
> allocator.
> + */
> +struct iommu_pasid_table_cfg {
> +	struct device			*iommu_dev;
> +	size_t				order;
> +	const struct iommu_pasid_sync_ops *sync;
> +
> +	dma_addr_t			base;
> +};
> +
> +struct iommu_pasid_table_ops *
> +iommu_alloc_pasid_ops(enum iommu_pasid_table_fmt fmt,
> +		      struct iommu_pasid_table_cfg *cfg,
> +		      void *cookie);
> +void iommu_free_pasid_ops(struct iommu_pasid_table_ops *ops);
> +
> +/**
> + * struct iommu_pasid_table - describes a set of PASID tables
> + *
> + * @fmt:	The PASID table format.
> + * @cookie:	An opaque token provided by the IOMMU driver and
> passed back to
> + *		any callback routine.
> + * @cfg:	A copy of the PASID table configuration.
> + * @ops:	The PASID table operations in use for this set of
> page tables.
> + */
> +struct iommu_pasid_table {
> +	enum iommu_pasid_table_fmt	fmt;
> +	void				*cookie;
> +	struct iommu_pasid_table_cfg	cfg;
> +	struct iommu_pasid_table_ops	ops;
> +};
> +
> +#define iommu_pasid_table_ops_to_table(ops) \
> +	container_of((ops), struct iommu_pasid_table, ops)
> +
> +struct iommu_pasid_init_fns {
> +	struct iommu_pasid_table *(*alloc)(struct
> iommu_pasid_table_cfg *cfg,
> +					   void *cookie);
> +	void (*free)(struct iommu_pasid_table *table);
> +};
> +
> +static inline void iommu_pasid_flush_all(struct iommu_pasid_table
> *table) +{
> +	table->cfg.sync->cfg_flush_all(table->cookie);
> +}
> +
> +static inline void iommu_pasid_flush(struct iommu_pasid_table *table,
> +					 int pasid, bool leaf)
> +{
> +	table->cfg.sync->cfg_flush(table->cookie, pasid, leaf);
> +}
> +
> +static inline void iommu_pasid_flush_tlbs(struct iommu_pasid_table
> *table,
> +					  int pasid,
> +					  struct iommu_pasid_entry
> *entry) +{
> +	table->cfg.sync->tlb_flush(table->cookie, pasid, entry);
> +}
> +
> +#endif /* __IOMMU_PASID_H */

[Jacob Pan]
--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Sinan Kaya Feb. 28, 2018, 1:26 a.m. UTC | #20
On 2/12/2018 1:33 PM, Jean-Philippe Brucker wrote:
> Add two new ioctl for VFIO containers. VFIO_IOMMU_BIND_PROCESS creates a
> bond between a container and a process address space, identified by a
> device-specific ID named PASID. This allows the device to target DMA
> transactions at the process virtual addresses without a need for mapping
> and unmapping buffers explicitly in the IOMMU. The process page tables are
> shared with the IOMMU, and mechanisms such as PCI ATS/PRI are used to
> handle faults. VFIO_IOMMU_UNBIND_PROCESS removes a bond created with
> VFIO_IOMMU_BIND_PROCESS.
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> ---
>  drivers/vfio/vfio_iommu_type1.c | 399 ++++++++++++++++++++++++++++++++++++++++
>  include/uapi/linux/vfio.h       |  76 ++++++++
>  2 files changed, 475 insertions(+)
> 
> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> index e30e29ae4819..cac066f0026b 100644
> --- a/drivers/vfio/vfio_iommu_type1.c
> +++ b/drivers/vfio/vfio_iommu_type1.c
> @@ -30,6 +30,7 @@
>  #include <linux/iommu.h>
>  #include <linux/module.h>
>  #include <linux/mm.h>
> +#include <linux/ptrace.h>
>  #include <linux/rbtree.h>
>  #include <linux/sched/signal.h>
>  #include <linux/sched/mm.h>
> @@ -60,6 +61,7 @@ MODULE_PARM_DESC(disable_hugepages,
>  
>  struct vfio_iommu {
>  	struct list_head	domain_list;
> +	struct list_head	mm_list;
>  	struct vfio_domain	*external_domain; /* domain for external user */
>  	struct mutex		lock;
>  	struct rb_root		dma_list;
> @@ -90,6 +92,15 @@ struct vfio_dma {
>  struct vfio_group {
>  	struct iommu_group	*iommu_group;
>  	struct list_head	next;
> +	bool			sva_enabled;
> +};
> +
> +struct vfio_mm {
> +#define VFIO_PASID_INVALID	(-1)
> +	spinlock_t		lock;
> +	int			pasid;
> +	struct mm_struct	*mm;
> +	struct list_head	next;
>  };
>  
>  /*
> @@ -1117,6 +1128,157 @@ static int vfio_iommu_replay(struct vfio_iommu *iommu,
>  	return 0;
>  }
>  
> +static int vfio_iommu_mm_exit(struct device *dev, int pasid, void *data)
> +{
> +	struct vfio_mm *vfio_mm = data;
> +
> +	/*
> +	 * The mm_exit callback cannot block, so we can't take the iommu mutex
> +	 * and remove this vfio_mm from the list. Hopefully the SVA code will
> +	 * relax its locking requirement in the future.
> +	 *
> +	 * We mostly care about attach_group, which will attempt to replay all
> +	 * binds in this container. Ensure that it doesn't touch this defunct mm
> +	 * struct, by clearing the pointer. The structure will be freed when the
> +	 * group is removed from the container.
> +	 */
> +	spin_lock(&vfio_mm->lock);
> +	vfio_mm->mm = NULL;
> +	spin_unlock(&vfio_mm->lock);
> +
> +	return 0;
> +}
> +
> +static int vfio_iommu_sva_init(struct device *dev, void *data)
> +{

data is not getting used.

> +
> +	int ret;
> +
> +	ret = iommu_sva_device_init(dev, IOMMU_SVA_FEAT_PASID |
> +				    IOMMU_SVA_FEAT_IOPF, 0);
> +	if (ret)
> +		return ret;
> +
> +	return iommu_register_mm_exit_handler(dev, vfio_iommu_mm_exit);
> +}
> +
> +static int vfio_iommu_sva_shutdown(struct device *dev, void *data)
> +{
> +	iommu_sva_device_shutdown(dev);
> +	iommu_unregister_mm_exit_handler(dev);
> +
> +	return 0;
> +}
> +
> +static int vfio_iommu_bind_group(struct vfio_iommu *iommu,
> +				 struct vfio_group *group,
> +				 struct vfio_mm *vfio_mm)
> +{
> +	int ret;
> +	int pasid;
> +
> +	if (!group->sva_enabled) {
> +		ret = iommu_group_for_each_dev(group->iommu_group, NULL,
> +					       vfio_iommu_sva_init);
> +		if (ret)
> +			return ret;
> +
> +		group->sva_enabled = true;
> +	}
> +
> +	ret = iommu_sva_bind_group(group->iommu_group, vfio_mm->mm, &pasid,
> +				   IOMMU_SVA_FEAT_PASID | IOMMU_SVA_FEAT_IOPF,
> +				   vfio_mm);
> +	if (ret)
> +		return ret;

don't you need to clean up the work done by vfio_iommu_sva_init() here.

> +
> +	if (WARN_ON(vfio_mm->pasid != VFIO_PASID_INVALID && pasid !=
> +		    vfio_mm->pasid))
> +		return -EFAULT;
> +
> +	vfio_mm->pasid = pasid;
> +
> +	return 0;
> +}
> +
> +static void vfio_iommu_unbind_group(struct vfio_group *group,
> +				    struct vfio_mm *vfio_mm)
> +{
> +	iommu_sva_unbind_group(group->iommu_group, vfio_mm->pasid);
> +}
> +
> +static void vfio_iommu_unbind(struct vfio_iommu *iommu,
> +			      struct vfio_mm *vfio_mm)
> +{
> +	struct vfio_group *group;
> +	struct vfio_domain *domain;
> +
> +	list_for_each_entry(domain, &iommu->domain_list, next)
> +		list_for_each_entry(group, &domain->group_list, next)
> +			vfio_iommu_unbind_group(group, vfio_mm);
> +}
> +
> +static bool vfio_mm_get(struct vfio_mm *vfio_mm)
> +{
> +	bool ret;
> +
> +	spin_lock(&vfio_mm->lock);
> +	ret = vfio_mm->mm && mmget_not_zero(vfio_mm->mm);
> +	spin_unlock(&vfio_mm->lock);
> +
> +	return ret;
> +}
> +
> +static void vfio_mm_put(struct vfio_mm *vfio_mm)
> +{
> +	mmput(vfio_mm->mm);
> +}
> +
> +static int vfio_iommu_replay_bind(struct vfio_iommu *iommu, struct vfio_group *group)
> +{
> +	int ret = 0;
> +	struct vfio_mm *vfio_mm;
> +
> +	list_for_each_entry(vfio_mm, &iommu->mm_list, next) {
> +		/*
> +		 * Ensure mm doesn't exit while we're binding it to the new
> +		 * group.
> +		 */
> +		if (!vfio_mm_get(vfio_mm))
> +			continue;
> +		ret = vfio_iommu_bind_group(iommu, group, vfio_mm);
> +		vfio_mm_put(vfio_mm);
> +
> +		if (ret)
> +			goto out_unbind;
> +	}
> +
> +	return 0;
> +
> +out_unbind:
> +	list_for_each_entry_continue_reverse(vfio_mm, &iommu->mm_list, next) {
> +		if (!vfio_mm_get(vfio_mm))
> +			continue;
> +		iommu_sva_unbind_group(group->iommu_group, vfio_mm->pasid);
> +		vfio_mm_put(vfio_mm);
> +	}
> +
> +	return ret;
> +}
> +
> +static void vfio_iommu_free_all_mm(struct vfio_iommu *iommu)
> +{
> +	struct vfio_mm *vfio_mm, *tmp;
> +
> +	/*
> +	 * No need for unbind() here. Since all groups are detached from this
> +	 * iommu, bonds have been removed.
> +	 */
> +	list_for_each_entry_safe(vfio_mm, tmp, &iommu->mm_list, next)
> +		kfree(vfio_mm);
> +	INIT_LIST_HEAD(&iommu->mm_list);
> +}
> +
>  /*
>   * We change our unmap behavior slightly depending on whether the IOMMU
>   * supports fine-grained superpages.  IOMMUs like AMD-Vi will use a superpage
> @@ -1301,6 +1463,15 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
>  		    d->prot == domain->prot) {
>  			iommu_detach_group(domain->domain, iommu_group);
>  			if (!iommu_attach_group(d->domain, iommu_group)) {
> +				if (vfio_iommu_replay_bind(iommu, group)) {
> +					iommu_detach_group(d->domain, iommu_group);
> +					ret = iommu_attach_group(domain->domain,
> +								 iommu_group);
> +					if (ret)
> +						goto out_domain;
> +					continue;
> +				}
> +
>  				list_add(&group->next, &d->group_list);
>  				iommu_domain_free(domain->domain);
>  				kfree(domain);
> @@ -1321,6 +1492,10 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
>  	if (ret)
>  		goto out_detach;
>  
> +	ret = vfio_iommu_replay_bind(iommu, group);
> +	if (ret)
> +		goto out_detach;
> +
>  	if (resv_msi) {
>  		ret = iommu_get_msi_cookie(domain->domain, resv_msi_base);
>  		if (ret)
> @@ -1426,6 +1601,11 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
>  			continue;
>  
>  		iommu_detach_group(domain->domain, iommu_group);
> +		if (group->sva_enabled) {
> +			iommu_group_for_each_dev(iommu_group, NULL,
> +						 vfio_iommu_sva_shutdown);
> +			group->sva_enabled = false;
> +		}
>  		list_del(&group->next);
>  		kfree(group);
>  		/*
> @@ -1441,6 +1621,7 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
>  					vfio_iommu_unmap_unpin_all(iommu);
>  				else
>  					vfio_iommu_unmap_unpin_reaccount(iommu);
> +				vfio_iommu_free_all_mm(iommu);
>  			}
>  			iommu_domain_free(domain->domain);
>  			list_del(&domain->next);
> @@ -1475,6 +1656,7 @@ static void *vfio_iommu_type1_open(unsigned long arg)
>  	}
>  
>  	INIT_LIST_HEAD(&iommu->domain_list);
> +	INIT_LIST_HEAD(&iommu->mm_list);
>  	iommu->dma_list = RB_ROOT;
>  	mutex_init(&iommu->lock);
>  	BLOCKING_INIT_NOTIFIER_HEAD(&iommu->notifier);
> @@ -1509,6 +1691,7 @@ static void vfio_iommu_type1_release(void *iommu_data)
>  		kfree(iommu->external_domain);
>  	}
>  
> +	vfio_iommu_free_all_mm(iommu);
>  	vfio_iommu_unmap_unpin_all(iommu);
>  
>  	list_for_each_entry_safe(domain, domain_tmp,
> @@ -1537,6 +1720,184 @@ static int vfio_domains_have_iommu_cache(struct vfio_iommu *iommu)
>  	return ret;
>  }
>  
> +static struct mm_struct *vfio_iommu_get_mm_by_vpid(pid_t vpid)
> +{
> +	struct mm_struct *mm;
> +	struct task_struct *task;
> +
> +	rcu_read_lock();
> +	task = find_task_by_vpid(vpid);
> +	if (task)
> +		get_task_struct(task);
> +	rcu_read_unlock();
> +	if (!task)
> +		return ERR_PTR(-ESRCH);
> +
> +	/* Ensure that current has RW access on the mm */
> +	mm = mm_access(task, PTRACE_MODE_ATTACH_REALCREDS);
> +	put_task_struct(task);
> +
> +	if (!mm)
> +		return ERR_PTR(-ESRCH);
> +
> +	return mm;
> +}
> +
> +static long vfio_iommu_type1_bind_process(struct vfio_iommu *iommu,
> +					  void __user *arg,
> +					  struct vfio_iommu_type1_bind *bind)
> +{
> +	struct vfio_iommu_type1_bind_process params;
> +	struct vfio_domain *domain;
> +	struct vfio_group *group;
> +	struct vfio_mm *vfio_mm;
> +	struct mm_struct *mm;
> +	unsigned long minsz;
> +	int ret = 0;
> +
> +	minsz = sizeof(*bind) + sizeof(params);
> +	if (bind->argsz < minsz)
> +		return -EINVAL;
> +
> +	arg += sizeof(*bind);
> +	if (copy_from_user(&params, arg, sizeof(params)))
> +		return -EFAULT;
> +
> +	if (params.flags & ~VFIO_IOMMU_BIND_PID)
> +		return -EINVAL;
> +
> +	if (params.flags & VFIO_IOMMU_BIND_PID) {
> +		mm = vfio_iommu_get_mm_by_vpid(params.pid);
> +		if (IS_ERR(mm))
> +			return PTR_ERR(mm);
> +	} else {
> +		mm = get_task_mm(current);
> +		if (!mm)
> +			return -EINVAL;
> +	}

I think you can merge mm failure in both states.

> +
> +	mutex_lock(&iommu->lock);
> +	if (!IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu)) {
> +		ret = -EINVAL;
> +		goto out_put_mm;
> +	}
> +
> +	list_for_each_entry(vfio_mm, &iommu->mm_list, next) {
> +		if (vfio_mm->mm != mm)
> +			continue;
> +
> +		params.pasid = vfio_mm->pasid;
> +
> +		ret = copy_to_user(arg, &params, sizeof(params)) ? -EFAULT : 0;
> +		goto out_put_mm;
> +	}
> +
> +	vfio_mm = kzalloc(sizeof(*vfio_mm), GFP_KERNEL);
> +	if (!vfio_mm) {
> +		ret = -ENOMEM;
> +		goto out_put_mm;
> +	}
> +
> +	vfio_mm->mm = mm;
> +	vfio_mm->pasid = VFIO_PASID_INVALID;
> +	spin_lock_init(&vfio_mm->lock);
> +
> +	list_for_each_entry(domain, &iommu->domain_list, next) {
> +		list_for_each_entry(group, &domain->group_list, next) {
> +			ret = vfio_iommu_bind_group(iommu, group, vfio_mm);
> +			if (ret)
> +				break;
> +		}
> +		if (ret)
> +			break;
> +	}
> +
> +	if (ret) {
> +		/* Undo all binds that already succeeded */
> +		list_for_each_entry_continue_reverse(group, &domain->group_list,
> +						     next)
> +			vfio_iommu_unbind_group(group, vfio_mm);
> +		list_for_each_entry_continue_reverse(domain, &iommu->domain_list,
> +						     next)
> +			list_for_each_entry(group, &domain->group_list, next)
> +				vfio_iommu_unbind_group(group, vfio_mm);
> +		kfree(vfio_mm);
> +	} else {
> +		list_add(&vfio_mm->next, &iommu->mm_list);
> +
> +		params.pasid = vfio_mm->pasid;
> +		ret = copy_to_user(arg, &params, sizeof(params)) ? -EFAULT : 0;
> +		if (ret) {
> +			vfio_iommu_unbind(iommu, vfio_mm);
> +			kfree(vfio_mm);
> +		}
> +	}
> +
> +out_put_mm:
> +	mutex_unlock(&iommu->lock);
> +	mmput(mm);
> +
> +	return ret;
> +}
> +
> +static long vfio_iommu_type1_unbind_process(struct vfio_iommu *iommu,
> +					    void __user *arg,
> +					    struct vfio_iommu_type1_bind *bind)
> +{
> +	int ret = -EINVAL;
> +	unsigned long minsz;
> +	struct mm_struct *mm;
> +	struct vfio_mm *vfio_mm;
> +	struct vfio_iommu_type1_bind_process params;
> +
> +	minsz = sizeof(*bind) + sizeof(params);
> +	if (bind->argsz < minsz)
> +		return -EINVAL;
> +
> +	arg += sizeof(*bind);
> +	if (copy_from_user(&params, arg, sizeof(params)))
> +		return -EFAULT;
> +
> +	if (params.flags & ~VFIO_IOMMU_BIND_PID)
> +		return -EINVAL;
> +
> +	/*
> +	 * We can't simply unbind a foreign process by PASID, because the
> +	 * process might have died and the PASID might have been reallocated to
> +	 * another process. Instead we need to fetch that process mm by PID
> +	 * again to make sure we remove the right vfio_mm. In addition, holding
> +	 * the mm guarantees that mm_users isn't dropped while we unbind and the
> +	 * exit_mm handler doesn't fire. While not strictly necessary, not
> +	 * having to care about that race simplifies everyone's life.
> +	 */
> +	if (params.flags & VFIO_IOMMU_BIND_PID) {
> +		mm = vfio_iommu_get_mm_by_vpid(params.pid);
> +		if (IS_ERR(mm))
> +			return PTR_ERR(mm);
> +	} else {
> +		mm = get_task_mm(current);
> +		if (!mm)
> +			return -EINVAL;
> +	}
> +

I think you can merge mm failure in both states.

> +	ret = -ESRCH;
> +	mutex_lock(&iommu->lock);
> +	list_for_each_entry(vfio_mm, &iommu->mm_list, next) {
> +		if (vfio_mm->mm != mm)
> +			continue;
> +

these loops look wierd 
1. for loops + break 
2. for loops + goto

how about closing the for loop here. and then return here if not vfio_mm
not found.


> +		vfio_iommu_unbind(iommu, vfio_mm);
> +		list_del(&vfio_mm->next);
> +		kfree(vfio_mm);
> +		ret = 0;
> +		break;
> +	}
> +	mutex_unlock(&iommu->lock);
> +	mmput(mm);
> +
> +	return ret;
> +}
> +
Jean-Philippe Brucker Feb. 28, 2018, 4:20 p.m. UTC | #21
On 27/02/18 06:21, Tian, Kevin wrote:
[...]
>> Technically nothing prevents it, but now the resv problem discussed on
>> patch 2/37 stands out. For example on x86 you'd probably need to carve
>> the
>> IOAPIC MSI range out of the process address space. On Arm you'd need to
>> create a write-only mapping for MSIs (IOMMU translates it to the IRQ chip
>> address, but thankfully accessing the doorbell from CPU side doesn't
>> trigger an MSI.)
> 
> so if overlap already exists when binding a process address space
> (since binding may happen much later than creating the process),
> I assume the call will simply fail since carve out at this point is not
> possible?

Yes in this case I think it's safer to abort the bind() call

Thanks,
Jean
--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Jean-Philippe Brucker Feb. 28, 2018, 4:25 p.m. UTC | #22
On 28/02/18 01:26, Sinan Kaya wrote:
[...]
>> +static int vfio_iommu_sva_init(struct device *dev, void *data)
>> +{
> 
> data is not getting used.

That's the pointer passed to "iommu_group_for_each_dev", NULL at the
moment. Next version of this patch will keep some state in data to
ensure one device per group.

>> +
>> +	int ret;
>> +
>> +	ret = iommu_sva_device_init(dev, IOMMU_SVA_FEAT_PASID |
>> +				    IOMMU_SVA_FEAT_IOPF, 0);
>> +	if (ret)
>> +		return ret;
>> +
>> +	return iommu_register_mm_exit_handler(dev, vfio_iommu_mm_exit);
>> +}
>> +
>> +static int vfio_iommu_sva_shutdown(struct device *dev, void *data)
>> +{
>> +	iommu_sva_device_shutdown(dev);
>> +	iommu_unregister_mm_exit_handler(dev);
>> +
>> +	return 0;
>> +}
>> +
>> +static int vfio_iommu_bind_group(struct vfio_iommu *iommu,
>> +				 struct vfio_group *group,
>> +				 struct vfio_mm *vfio_mm)
>> +{
>> +	int ret;
>> +	int pasid;
>> +
>> +	if (!group->sva_enabled) {
>> +		ret = iommu_group_for_each_dev(group->iommu_group, NULL,
>> +					       vfio_iommu_sva_init);
>> +		if (ret)
>> +			return ret;
>> +
>> +		group->sva_enabled = true;
>> +	}
>> +
>> +	ret = iommu_sva_bind_group(group->iommu_group, vfio_mm->mm, &pasid,
>> +				   IOMMU_SVA_FEAT_PASID | IOMMU_SVA_FEAT_IOPF,
>> +				   vfio_mm);
>> +	if (ret)
>> +		return ret;
> 
> don't you need to clean up the work done by vfio_iommu_sva_init() here.

Yes I suppose we can, if we enabled during this bind

[...]
>> +static long vfio_iommu_type1_bind_process(struct vfio_iommu *iommu,
>> +					  void __user *arg,
>> +					  struct vfio_iommu_type1_bind *bind)
>> +{
>> +	struct vfio_iommu_type1_bind_process params;
>> +	struct vfio_domain *domain;
>> +	struct vfio_group *group;
>> +	struct vfio_mm *vfio_mm;
>> +	struct mm_struct *mm;
>> +	unsigned long minsz;
>> +	int ret = 0;
>> +
>> +	minsz = sizeof(*bind) + sizeof(params);
>> +	if (bind->argsz < minsz)
>> +		return -EINVAL;
>> +
>> +	arg += sizeof(*bind);
>> +	if (copy_from_user(&params, arg, sizeof(params)))
>> +		return -EFAULT;
>> +
>> +	if (params.flags & ~VFIO_IOMMU_BIND_PID)
>> +		return -EINVAL;
>> +
>> +	if (params.flags & VFIO_IOMMU_BIND_PID) {
>> +		mm = vfio_iommu_get_mm_by_vpid(params.pid);
>> +		if (IS_ERR(mm))
>> +			return PTR_ERR(mm);
>> +	} else {
>> +		mm = get_task_mm(current);
>> +		if (!mm)
>> +			return -EINVAL;
>> +	}
> 
> I think you can merge mm failure in both states.

Yes, I think vfio_iommu_get_mm_by_vpid could return NULL instead of an
error pointer, and we can throw -ESRCH in all cases (the existing
get_task_mm() failure in this driver does return -ESRCH, so it would be
consistent.)

[...]
>> +	/*
>> +	 * We can't simply unbind a foreign process by PASID, because the
>> +	 * process might have died and the PASID might have been reallocated to
>> +	 * another process. Instead we need to fetch that process mm by PID
>> +	 * again to make sure we remove the right vfio_mm. In addition, holding
>> +	 * the mm guarantees that mm_users isn't dropped while we unbind and the
>> +	 * exit_mm handler doesn't fire. While not strictly necessary, not
>> +	 * having to care about that race simplifies everyone's life.
>> +	 */
>> +	if (params.flags & VFIO_IOMMU_BIND_PID) {
>> +		mm = vfio_iommu_get_mm_by_vpid(params.pid);
>> +		if (IS_ERR(mm))
>> +			return PTR_ERR(mm);
>> +	} else {
>> +		mm = get_task_mm(current);
>> +		if (!mm)
>> +			return -EINVAL;
>> +	}
>> +
> 
> I think you can merge mm failure in both states.

ok

>> +	ret = -ESRCH;
>> +	mutex_lock(&iommu->lock);
>> +	list_for_each_entry(vfio_mm, &iommu->mm_list, next) {
>> +		if (vfio_mm->mm != mm)
>> +			continue;
>> +
> 
> these loops look wierd 
> 1. for loops + break 
> 2. for loops + goto
> 
> how about closing the for loop here. and then return here if not vfio_mm
> not found.

ok

>> +		vfio_iommu_unbind(iommu, vfio_mm);
>> +		list_del(&vfio_mm->next);
>> +		kfree(vfio_mm);
>> +		ret = 0;
>> +		break;
>> +	}
>> +	mutex_unlock(&iommu->lock);
>> +	mmput(mm);
>> +
>> +	return ret;
>> +}
>> +
> 

Thanks,
Jean
--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Sinan Kaya Feb. 28, 2018, 8:34 p.m. UTC | #23
On 2/12/2018 1:33 PM, Jean-Philippe Brucker wrote:
> +int iommu_sva_unbind_group(struct iommu_group *group, int pasid)
> +{
> +	struct group_device *device;
> +
> +	mutex_lock(&group->mutex);
> +	list_for_each_entry(device, &group->devices, list)
> +		iommu_sva_unbind_device(device->dev, pasid);
> +	mutex_unlock(&group->mutex);
> +
> +	return 0;
> +}

I think we should handle the errors returned by iommu_sva_unbind_device() here
or at least print a warning if we want to still continue unbinding.
Yi Liu March 1, 2018, 3:03 a.m. UTC | #24
Hi Jean,

> From: Jean-Philippe Brucker [mailto:jean-philippe.brucker@arm.com]
> Sent: Thursday, February 15, 2018 8:41 PM
> Subject: Re: [PATCH 02/37] iommu/sva: Bind process address spaces to devices
> 
> On 13/02/18 23:34, Tian, Kevin wrote:
> >> From: Jean-Philippe Brucker
> >> Sent: Tuesday, February 13, 2018 8:57 PM
> >>
> >> On 13/02/18 07:54, Tian, Kevin wrote:
> >>>> From: Jean-Philippe Brucker
> >>>> Sent: Tuesday, February 13, 2018 2:33 AM
> >>>>
> >>>> Add bind() and unbind() operations to the IOMMU API. Device drivers
> >> can
> >>>> use them to share process page tables with their devices.
> >>>> bind_group() is provided for VFIO's convenience, as it needs to
> >>>> provide a coherent interface on containers. Other device drivers
> >>>> will most likely want to use bind_device(), which binds a single device in the
> group.
> >>>
> >>> I saw your bind_group implementation tries to bind the address space
> >>> for all devices within a group, which IMO has some problem. Based on
> >> PCIe
> >>> spec, packet routing on the bus doesn't take PASID into consideration.
> >>> since devices within same group cannot be isolated based on
> >>> requestor-
> >> ID
> >>> i.e. traffic not guaranteed going to IOMMU, enabling SVA on multiple
> >> devices
> >>> could cause undesired p2p.
> >> But so does enabling "classic" DMA... If two devices are not
> >> protected by ACS for example, they are put in the same IOMMU group,
> >> and one device might be able to snoop the other's DMA. VFIO allows
> >> userspace to create a container for them and use MAP/UNMAP, but makes
> >> it explicit to the user that for DMA, these devices are not isolated
> >> and must be considered as a single device (you can't pass them to
> >> different VMs or put them in different containers). So I tried to
> >> keep the same idea as MAP/UNMAP for SVA, performing BIND/UNBIND
> >> operations on the VFIO container instead of the device.
> >
> > there is a small difference. for classic DMA we can reserve PCI BARs
> > when allocating IOVA, thus multiple devices in the same group can
> > still work correctly applied with same translation, if isolation is
> > not cared in between. However for SVA it's CPU virtual addresses
> > managed by kernel mm thus difficult to introduce similar address
> > reservation. Then it's possible for a VA falling into other device's
> > BAR in the same group and cause undesired p2p traffic. In such regard,
> > SVA is actually functionally-broken.
> 
> I think the problem exists even if there is a single device in the group.
> If for example, malloc() returns a VA that corresponds to a PCI host bridge in IOVA
> space, performing DMA on that buffer won't reach the IOMMU and will cause
> undesirable side-effects.

If only a single device in a group, should it mean there is ACS support in
the path from this device to root complex? It means any memory request
from this device would be upstreamed to root complex, thus it should be
able to avoid undesired p2p traffics. So I intend to believe, even we do
bind in group level, we actually expect to make it work only for the case
where a single device within a group.

Thanks,
Yi Liu
--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Baolu Lu March 1, 2018, 6:52 a.m. UTC | #25
Hi Jean,

On 02/13/2018 02:33 AM, Jean-Philippe Brucker wrote:
> Introduce boilerplate code for allocating IOMMU mm structures and binding
> them to devices. Four operations are added to IOMMU drivers:
>
> * mm_alloc(): to create an io_mm structure and perform architecture-
>   specific operations required to grab the process (for instance on ARM,
>   pin down the CPU ASID so that the process doesn't get assigned a new
>   ASID on rollover).
>
>   There is a single valid io_mm structure per Linux mm. Future extensions
>   may also use io_mm for kernel-managed address spaces, populated with
>   map()/unmap() calls instead of bound to process address spaces. This
>   patch focuses on "shared" io_mm.
>
> * mm_attach(): attach an mm to a device. The IOMMU driver checks that the
>   device is capable of sharing an address space, and writes the PASID
>   table entry to install the pgd.
>
>   Some IOMMU drivers will have a single PASID table per domain, for
>   convenience. Other can implement it differently but to help these
>   drivers, mm_attach and mm_detach take 'attach_domain' and
>   'detach_domain' parameters, that tell whether they need to set and clear
>   the PASID entry or only send the required TLB invalidations.
>
> * mm_detach(): detach an mm from a device. The IOMMU driver removes the
>   PASID table entry and invalidates the IOTLBs.
>
> * mm_free(): free a structure allocated by mm_alloc(), and let arch
>   release the process.
>
> mm_attach and mm_detach operations are serialized with a spinlock. At the
> moment it is global, but if we try to optimize it, the core should at
> least prevent concurrent attach()/detach() on the same domain (so
> multi-level PASID table code can allocate tables lazily). mm_alloc() can
> sleep, but mm_free must not (because we'll have to call it from call_srcu
> later on.)
>
> At the moment we use an IDR for allocating PASIDs and retrieving contexts.
> We also use a single spinlock. These can be refined and optimized later (a
> custom allocator will be needed for top-down PASID allocation).
>
> Keeping track of address spaces requires the use of MMU notifiers.
> Handling process exit with regard to unbind() is tricky, so it is left for
> another patch and we explicitly fail mm_alloc() for the moment.
>
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> ---
>  drivers/iommu/iommu-sva.c | 382 +++++++++++++++++++++++++++++++++++++++++++++-
>  drivers/iommu/iommu.c     |   2 +
>  include/linux/iommu.h     |  25 +++
>  3 files changed, 406 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/iommu/iommu-sva.c b/drivers/iommu/iommu-sva.c
> index 593685d891bf..f9af9d66b3ed 100644
> --- a/drivers/iommu/iommu-sva.c
> +++ b/drivers/iommu/iommu-sva.c
> @@ -7,11 +7,321 @@
>   * SPDX-License-Identifier: GPL-2.0
>   */
>  
> +#include <linux/idr.h>
>  #include <linux/iommu.h>
> +#include <linux/slab.h>
> +#include <linux/spinlock.h>
> +
> +/**
> + * DOC: io_mm model
> + *
> + * The io_mm keeps track of process address spaces shared between CPU and IOMMU.
> + * The following example illustrates the relation between structures
> + * iommu_domain, io_mm and iommu_bond. An iommu_bond is a link between io_mm and
> + * device. A device can have multiple io_mm and an io_mm may be bound to
> + * multiple devices.
> + *              ___________________________
> + *             |  IOMMU domain A           |
> + *             |  ________________         |
> + *             | |  IOMMU group   |        +------- io_pgtables
> + *             | |                |        |
> + *             | |   dev 00:00.0 ----+------- bond --- io_mm X
> + *             | |________________|   \    |
> + *             |                       '----- bond ---.
> + *             |___________________________|           \
> + *              ___________________________             \
> + *             |  IOMMU domain B           |           io_mm Y
> + *             |  ________________         |           / /
> + *             | |  IOMMU group   |        |          / /
> + *             | |                |        |         / /
> + *             | |   dev 00:01.0 ------------ bond -' /
> + *             | |   dev 00:01.1 ------------ bond --'
> + *             | |________________|        |
> + *             |                           +------- io_pgtables
> + *             |___________________________|
> + *
> + * In this example, device 00:00.0 is in domain A, devices 00:01.* are in domain
> + * B. All devices within the same domain access the same address spaces. Device
> + * 00:00.0 accesses address spaces X and Y, each corresponding to an mm_struct.
> + * Devices 00:01.* only access address space Y. In addition each
> + * IOMMU_DOMAIN_DMA domain has a private address space, io_pgtable, that is
> + * managed with iommu_map()/iommu_unmap(), and isn't shared with the CPU MMU.
> + *
> + * To obtain the above configuration, users would for instance issue the
> + * following calls:
> + *
> + *     iommu_sva_bind_device(dev 00:00.0, mm X, ...) -> PASID 1
> + *     iommu_sva_bind_device(dev 00:00.0, mm Y, ...) -> PASID 2
> + *     iommu_sva_bind_device(dev 00:01.0, mm Y, ...) -> PASID 2
> + *     iommu_sva_bind_device(dev 00:01.1, mm Y, ...) -> PASID 2
> + *
> + * A single Process Address Space ID (PASID) is allocated for each mm. In the
> + * example, devices use PASID 1 to read/write into address space X and PASID 2
> + * to read/write into address space Y.
> + *
> + * Hardware tables describing this configuration in the IOMMU would typically
> + * look like this:
> + *
> + *                                PASID tables
> + *                                 of domain A
> + *                              .->+--------+
> + *                             / 0 |        |-------> io_pgtable
> + *                            /    +--------+
> + *            Device tables  /   1 |        |-------> pgd X
> + *              +--------+  /      +--------+
> + *      00:00.0 |      A |-'     2 |        |--.
> + *              +--------+         +--------+   \
> + *              :        :       3 |        |    \
> + *              +--------+         +--------+     --> pgd Y
> + *      00:01.0 |      B |--.                    /
> + *              +--------+   \                  |
> + *      00:01.1 |      B |----+   PASID tables  |
> + *              +--------+     \   of domain B  |
> + *                              '->+--------+   |
> + *                               0 |        |-- | --> io_pgtable
> + *                                 +--------+   |
> + *                               1 |        |   |
> + *                                 +--------+   |
> + *                               2 |        |---'
> + *                                 +--------+
> + *                               3 |        |
> + *                                 +--------+
> + *
> + * With this model, a single call binds all devices in a given domain to an
> + * address space. Other devices in the domain will get the same bond implicitly.
> + * However, users must issue one bind() for each device, because IOMMUs may
> + * implement SVA differently. Furthermore, mandating one bind() per device
> + * allows the driver to perform sanity-checks on device capabilities.
> + *
> + * On Arm and AMD IOMMUs, entry 0 of the PASID table can be used to hold
> + * non-PASID translations. In this case PASID 0 is reserved and entry 0 points
> + * to the io_pgtable base. On Intel IOMMU, the io_pgtable base would be held in
> + * the device table and PASID 0 would be available to the allocator.
> + */
>  
>  /* TODO: stub for the fault queue. Remove later. */
>  #define iommu_fault_queue_flush(...)
>  
> +struct iommu_bond {
> +	struct io_mm		*io_mm;
> +	struct device		*dev;
> +	struct iommu_domain	*domain;
> +
> +	struct list_head	mm_head;
> +	struct list_head	dev_head;
> +	struct list_head	domain_head;
> +
> +	void			*drvdata;
> +
> +	/* Number of bind() calls */
> +	refcount_t		refs;
> +};
> +
> +/*
> + * Because we're using an IDR, PASIDs are limited to 31 bits (the sign bit is
> + * used for returning errors). In practice implementations will use at most 20
> + * bits, which is the PCI limit.
> + */
> +static DEFINE_IDR(iommu_pasid_idr);
> +
> +/*
> + * For the moment this is an all-purpose lock. It serializes
> + * access/modifications to bonds, access/modifications to the PASID IDR, and
> + * changes to io_mm refcount as well.
> + */
> +static DEFINE_SPINLOCK(iommu_sva_lock);
> +
> +static struct io_mm *
> +io_mm_alloc(struct iommu_domain *domain, struct device *dev,
> +	    struct mm_struct *mm)
> +{
> +	int ret;
> +	int pasid;
> +	struct io_mm *io_mm;
> +	struct iommu_param *dev_param = dev->iommu_param;
> +
> +	if (!dev_param || !domain->ops->mm_alloc || !domain->ops->mm_free)
> +		return ERR_PTR(-ENODEV);
> +
> +	io_mm = domain->ops->mm_alloc(domain, mm);
> +	if (IS_ERR(io_mm))
> +		return io_mm;
> +	if (!io_mm)
> +		return ERR_PTR(-ENOMEM);
> +
> +	/*
> +	 * The mm must not be freed until after the driver frees the io_mm
> +	 * (which may involve unpinning the CPU ASID for instance, requiring a
> +	 * valid mm struct.)
> +	 */
> +	mmgrab(mm);
> +
> +	io_mm->mm		= mm;
> +	io_mm->release		= domain->ops->mm_free;
> +	INIT_LIST_HEAD(&io_mm->devices);
> +
> +	idr_preload(GFP_KERNEL);
> +	spin_lock(&iommu_sva_lock);
> +	pasid = idr_alloc_cyclic(&iommu_pasid_idr, io_mm, dev_param->min_pasid,
> +				 dev_param->max_pasid + 1, GFP_ATOMIC);

Can the pasid management code be moved into a common library?
PASID is not stick to SVA. An IOMMU model device could be designed
to use PASID for second level translation (classical DMA translation)
as well.

Best regards,
Lu Baolu
--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Christian König March 1, 2018, 8:04 a.m. UTC | #26
Am 01.03.2018 um 07:52 schrieb Lu Baolu:
> Hi Jean,
>
> On 02/13/2018 02:33 AM, Jean-Philippe Brucker wrote:
>> [SNIP]
>> +	pasid = idr_alloc_cyclic(&iommu_pasid_idr, io_mm, dev_param->min_pasid,
>> +				 dev_param->max_pasid + 1, GFP_ATOMIC);
> Can the pasid management code be moved into a common library?
> PASID is not stick to SVA. An IOMMU model device could be designed
> to use PASID for second level translation (classical DMA translation)
> as well.

Yeah, we have the same problem on amdgpu.

We assign PASIDs to clients even when IOMMU isn't present in the system 
just because we need it for debugging.

E.g. when the hardware detects that some shader program is doing 
something nasty we get the PASID in the interrupt and could for example 
use it to inform the client about the fault.

Regards,
Christian.

>
> Best regards,
> Lu Baolu

--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Jean-Philippe Brucker March 2, 2018, 12:32 p.m. UTC | #27
On 28/02/18 20:34, Sinan Kaya wrote:
> On 2/12/2018 1:33 PM, Jean-Philippe Brucker wrote:
>> +int iommu_sva_unbind_group(struct iommu_group *group, int pasid)
>> +{
>> +	struct group_device *device;
>> +
>> +	mutex_lock(&group->mutex);
>> +	list_for_each_entry(device, &group->devices, list)
>> +		iommu_sva_unbind_device(device->dev, pasid);
>> +	mutex_unlock(&group->mutex);
>> +
>> +	return 0;
>> +}
> 
> I think we should handle the errors returned by iommu_sva_unbind_device() here
> or at least print a warning if we want to still continue unbinding. 

Agreed, though bind_group/unbind_group are probably going away in next
series

Thanks,
Jean

--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Jean-Philippe Brucker March 2, 2018, 4:03 p.m. UTC | #28
On 01/03/18 03:03, Liu, Yi L wrote:
> Hi Jean,
> 
>> From: Jean-Philippe Brucker [mailto:jean-philippe.brucker@arm.com]
>> Sent: Thursday, February 15, 2018 8:41 PM
>> Subject: Re: [PATCH 02/37] iommu/sva: Bind process address spaces to devices
>>
>> On 13/02/18 23:34, Tian, Kevin wrote:
>>>> From: Jean-Philippe Brucker
>>>> Sent: Tuesday, February 13, 2018 8:57 PM
>>>>
>>>> On 13/02/18 07:54, Tian, Kevin wrote:
>>>>>> From: Jean-Philippe Brucker
>>>>>> Sent: Tuesday, February 13, 2018 2:33 AM
>>>>>>
>>>>>> Add bind() and unbind() operations to the IOMMU API. Device drivers
>>>> can
>>>>>> use them to share process page tables with their devices.
>>>>>> bind_group() is provided for VFIO's convenience, as it needs to
>>>>>> provide a coherent interface on containers. Other device drivers
>>>>>> will most likely want to use bind_device(), which binds a single device in the
>> group.
>>>>>
>>>>> I saw your bind_group implementation tries to bind the address space
>>>>> for all devices within a group, which IMO has some problem. Based on
>>>> PCIe
>>>>> spec, packet routing on the bus doesn't take PASID into consideration.
>>>>> since devices within same group cannot be isolated based on
>>>>> requestor-
>>>> ID
>>>>> i.e. traffic not guaranteed going to IOMMU, enabling SVA on multiple
>>>> devices
>>>>> could cause undesired p2p.
>>>> But so does enabling "classic" DMA... If two devices are not
>>>> protected by ACS for example, they are put in the same IOMMU group,
>>>> and one device might be able to snoop the other's DMA. VFIO allows
>>>> userspace to create a container for them and use MAP/UNMAP, but makes
>>>> it explicit to the user that for DMA, these devices are not isolated
>>>> and must be considered as a single device (you can't pass them to
>>>> different VMs or put them in different containers). So I tried to
>>>> keep the same idea as MAP/UNMAP for SVA, performing BIND/UNBIND
>>>> operations on the VFIO container instead of the device.
>>>
>>> there is a small difference. for classic DMA we can reserve PCI BARs
>>> when allocating IOVA, thus multiple devices in the same group can
>>> still work correctly applied with same translation, if isolation is
>>> not cared in between. However for SVA it's CPU virtual addresses
>>> managed by kernel mm thus difficult to introduce similar address
>>> reservation. Then it's possible for a VA falling into other device's
>>> BAR in the same group and cause undesired p2p traffic. In such regard,
>>> SVA is actually functionally-broken.
>>
>> I think the problem exists even if there is a single device in the group.
>> If for example, malloc() returns a VA that corresponds to a PCI host bridge in IOVA
>> space, performing DMA on that buffer won't reach the IOMMU and will cause
>> undesirable side-effects.
> 
> If only a single device in a group, should it mean there is ACS support in
> the path from this device to root complex? It means any memory request
> from this device would be upstreamed to root complex, thus it should be
> able to avoid undesired p2p traffics. So I intend to believe, even we do
> bind in group level, we actually expect to make it work only for the case
> where a single device within a group.

Yes if each device has its own group then ACS is properly enabled.

Even without thinking about ACS or p2p, all memory requests don't
necessarily make it to the IOMMU. For example transactions targeting the
PCI host bridge MMIO window (marked as RESV_RESERVED by dma-iommu.c),
may get eaten by the RC and not reach the IOMMU (I'm blindly following
the code here, don't have anything in the spec to back me up). Commit
fade1ec055dc also refers to "faults, corruption and other badness"
though I don't know if that's only for PCI or could also affect future
systems.

And I don't think prefixing transactions with a PASID changes the
situation. I couldn't find anything in the PCIe spec contradicting it
and I guess it's up to the root complex implementation. So I tend to
take a conservative approach and assume that RESV_RESERVED regions will
also apply to PASID-prefixed traffic.

Thanks,
Jean
--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Jean-Philippe Brucker March 2, 2018, 4:19 p.m. UTC | #29
On 01/03/18 06:52, Lu Baolu wrote:
> Can the pasid management code be moved into a common library?
> PASID is not stick to SVA. An IOMMU model device could be designed
> to use PASID for second level translation (classical DMA translation)
> as well.

What do you mean by second level translation? Do you see a use-case with
nesting translation within the host?

I agree that PASID + classical DMA is desirable. A device driver would
allocate PASIDs and perform iommu_sva_map(domain, pasid, iova, pa, size,
prot) and iommu_sva_unmap(domain, pasid, iova, size). I'm hoping that we
can also augment the DMA API with PASIDs, and that a driver can use both
shared and private contexts simultaneously. So that it can use a few
PASIDs for management purpose, and assign the rest to userspace.

The intent is for iommu-sva.c to be this common library. Work for
"private" PASID allocation is underway, see Jordan Crouse's series
posted last week https://www.spinics.net/lists/arm-kernel/msg635857.html

Thanks,
Jean

--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Jean-Philippe Brucker March 2, 2018, 4:42 p.m. UTC | #30
On 01/03/18 08:04, Christian König wrote:
> Am 01.03.2018 um 07:52 schrieb Lu Baolu:
>> Hi Jean,
>>
>> On 02/13/2018 02:33 AM, Jean-Philippe Brucker wrote:
>>> [SNIP]
>>> +	pasid = idr_alloc_cyclic(&iommu_pasid_idr, io_mm, dev_param->min_pasid,
>>> +				 dev_param->max_pasid + 1, GFP_ATOMIC);
>> Can the pasid management code be moved into a common library?
>> PASID is not stick to SVA. An IOMMU model device could be designed
>> to use PASID for second level translation (classical DMA translation)
>> as well.
> 
> Yeah, we have the same problem on amdgpu.
> 
> We assign PASIDs to clients even when IOMMU isn't present in the system 
> just because we need it for debugging.
> 
> E.g. when the hardware detects that some shader program is doing 
> something nasty we get the PASID in the interrupt and could for example 
> use it to inform the client about the fault.

This seems like a new requirement altogether, and a bit outside the
scope of this series to be honest. Is the client userspace in this
context? I guess it would be mostly for prototyping then? Otherwise you
probably don't want to hand GPU contexts to userspace without an IOMMU
isolating them?

If you don't need mm tracking/sharing or iommu_map/unmap, then maybe an
IDR private to the GPU driver would be enough? If you do need mm
tracking, I suppose we could modify iommu_sva_bind() to allocate and
track io_mm even if the given device doesn't have an IOMMU, but it seems
a bit backward.

Thanks,
Jean
--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Dongdong Liu March 5, 2018, 12:29 p.m. UTC | #31
>
> +static int arm_smmu_enable_pri(struct arm_smmu_master_data *master)
> +{
> +	int ret, pos;
> +	struct pci_dev *pdev;
> +	/*
> +	 * TODO: find a good inflight PPR number. We should divide the PRI queue
> +	 * by the number of PRI-capable devices, but it's impossible to know
> +	 * about current and future (hotplugged) devices. So we're at risk of
> +	 * dropping PPRs (and leaking pending requests in the FQ).
> +	 */
> +	size_t max_inflight_pprs = 16;
> +	struct arm_smmu_device *smmu = master->smmu;
> +
> +	if (!(smmu->features & ARM_SMMU_FEAT_PRI) || !dev_is_pci(master->dev))
> +		return -ENOSYS;
> +
> +	pdev = to_pci_dev(master->dev);
> +
 From here
> +	pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_PRI);
> +	if (!pos)
> +		return -ENOSYS;
to here, seems this code is not needed as it is already done in
pci_reset_pri().

Thanks,
Dongdong
> +
> +	ret = pci_reset_pri(pdev);
> +	if (ret)
> +		return ret;
> +
> +	ret = pci_enable_pri(pdev, max_inflight_pprs);
> +	if (ret) {
> +		dev_err(master->dev, "cannot enable PRI: %d\n", ret);
> +		return ret;
> +	}
> +
> +	master->can_fault = true;
> +	master->ste.prg_resp_needs_ssid = pci_prg_resp_requires_prefix(pdev);
> +
> +	dev_dbg(master->dev, "enabled PRI");
> +
> +	return 0;
> +}
> +
>  static void arm_smmu_disable_ats(struct arm_smmu_master_data *master)
>  {
>  	struct pci_dev *pdev;
> @@ -2548,6 +2592,22 @@ static void arm_smmu_disable_ats(struct arm_smmu_master_data *master)
>  	pci_disable_ats(pdev);
>  }
>
> +static void arm_smmu_disable_pri(struct arm_smmu_master_data *master)
> +{
> +	struct pci_dev *pdev;
> +
> +	if (!dev_is_pci(master->dev))
> +		return;
> +
> +	pdev = to_pci_dev(master->dev);
> +
> +	if (!pdev->pri_enabled)
> +		return;
> +
> +	pci_disable_pri(pdev);
> +	master->can_fault = false;
> +}
> +
>  static int arm_smmu_insert_master(struct arm_smmu_device *smmu,
>  				  struct arm_smmu_master_data *master)
>  {
> @@ -2668,12 +2728,13 @@ static int arm_smmu_add_device(struct device *dev)
>  		master->ste.can_stall = true;
>  	}
>
> -	arm_smmu_enable_ats(master);
> +	if (!arm_smmu_enable_ats(master))
> +		arm_smmu_enable_pri(master);
>
>  	group = iommu_group_get_for_dev(dev);
>  	if (IS_ERR(group)) {
>  		ret = PTR_ERR(group);
> -		goto err_disable_ats;
> +		goto err_disable_pri;
>  	}
>
>  	iommu_group_put(group);
> @@ -2682,7 +2743,8 @@ static int arm_smmu_add_device(struct device *dev)
>
>  	return 0;
>
> -err_disable_ats:
> +err_disable_pri:
> +	arm_smmu_disable_pri(master);
>  	arm_smmu_disable_ats(master);
>
>  	return ret;
> @@ -2702,6 +2764,8 @@ static void arm_smmu_remove_device(struct device *dev)
>  	if (master && master->ste.assigned)
>  		arm_smmu_detach_dev(dev);
>  	arm_smmu_remove_master(smmu, master);
> +
> +	arm_smmu_disable_pri(master);
>  	arm_smmu_disable_ats(master);
>
>  	iommu_group_remove_device(dev);
>

--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Jean-Philippe Brucker March 5, 2018, 1:09 p.m. UTC | #32
On 05/03/18 12:29, Dongdong Liu wrote:
>>
>> +static int arm_smmu_enable_pri(struct arm_smmu_master_data *master)
>> +{
>> +	int ret, pos;
>> +	struct pci_dev *pdev;
>> +	/*
>> +	 * TODO: find a good inflight PPR number. We should divide the PRI queue
>> +	 * by the number of PRI-capable devices, but it's impossible to know
>> +	 * about current and future (hotplugged) devices. So we're at risk of
>> +	 * dropping PPRs (and leaking pending requests in the FQ).
>> +	 */
>> +	size_t max_inflight_pprs = 16;
>> +	struct arm_smmu_device *smmu = master->smmu;
>> +
>> +	if (!(smmu->features & ARM_SMMU_FEAT_PRI) || !dev_is_pci(master->dev))
>> +		return -ENOSYS;
>> +
>> +	pdev = to_pci_dev(master->dev);
>> +
>  From here
>> +	pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_PRI);
>> +	if (!pos)
>> +		return -ENOSYS;
> to here, seems this code is not needed as it is already done in
> pci_reset_pri().

Indeed, thanks. It would allow to differentiate a device that doesn't
support PRI from a reset error, but since we ignore the return value at
the moment I'll remove it.

Thanks,
Jean
--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Sinan Kaya March 5, 2018, 3:28 p.m. UTC | #33
On 2/12/2018 1:33 PM, Jean-Philippe Brucker wrote:
> +static void io_mm_free(struct io_mm *io_mm)
> +{
> +	struct mm_struct *mm;
> +	void (*release)(struct io_mm *);
> +
> +	release = io_mm->release;
> +	mm = io_mm->mm;
> +
> +	release(io_mm);

Is there any reason why you can't call iommu->release()
here directly? Why do you need the release local variable?

> +	mmdrop(mm);
> +}
> +
Sinan Kaya March 5, 2018, 9:44 p.m. UTC | #34
On 2/12/2018 1:33 PM, Jean-Philippe Brucker wrote:
> +static int iommu_queue_fault(struct iommu_domain *domain, struct device *dev,
> +			     struct iommu_fault_event *evt)
> +{
> +	struct iommu_fault_group *group;
> +	struct iommu_fault_context *fault, *next;
> +
> +	if (!iommu_fault_queue)
> +		return -ENOSYS;
> +
> +	if (!evt->last_req) {
> +		fault = kzalloc(sizeof(*fault), GFP_KERNEL);
> +		if (!fault)
> +			return -ENOMEM;
> +
> +		fault->evt = *evt;
> +		fault->dev = dev;
> +
> +		/* Non-last request of a group. Postpone until the last one */
> +		spin_lock(&iommu_partial_faults_lock);
> +		list_add_tail(&fault->head, &iommu_partial_faults);
> +		spin_unlock(&iommu_partial_faults_lock);
> +
> +		return IOMMU_PAGE_RESP_HANDLED;
> +	}
> +
> +	group = kzalloc(sizeof(*group), GFP_KERNEL);
> +	if (!group)
> +		return -ENOMEM;

Release the requests in iommu_partial_faults here.
Sinan Kaya March 5, 2018, 9:53 p.m. UTC | #35
On 2/12/2018 1:33 PM, Jean-Philippe Brucker wrote:
> +static struct workqueue_struct *iommu_fault_queue;

Is there anyway we can make this fault queue per struct device?
Since this is common code, I think it needs some care.
Jean-Philippe Brucker March 6, 2018, 10:24 a.m. UTC | #36
On 05/03/18 21:44, Sinan Kaya wrote:
> On 2/12/2018 1:33 PM, Jean-Philippe Brucker wrote:
>> +static int iommu_queue_fault(struct iommu_domain *domain, struct device *dev,
>> +			     struct iommu_fault_event *evt)
>> +{
>> +	struct iommu_fault_group *group;
>> +	struct iommu_fault_context *fault, *next;
>> +
>> +	if (!iommu_fault_queue)
>> +		return -ENOSYS;
>> +
>> +	if (!evt->last_req) {
>> +		fault = kzalloc(sizeof(*fault), GFP_KERNEL);
>> +		if (!fault)
>> +			return -ENOMEM;
>> +
>> +		fault->evt = *evt;
>> +		fault->dev = dev;
>> +
>> +		/* Non-last request of a group. Postpone until the last one */
>> +		spin_lock(&iommu_partial_faults_lock);
>> +		list_add_tail(&fault->head, &iommu_partial_faults);
>> +		spin_unlock(&iommu_partial_faults_lock);
>> +
>> +		return IOMMU_PAGE_RESP_HANDLED;
>> +	}
>> +
>> +	group = kzalloc(sizeof(*group), GFP_KERNEL);
>> +	if (!group)
>> +		return -ENOMEM;
> 
> Release the requests in iommu_partial_faults here.

We move these requests to the group->faults list (which btw should use
list_move instead of the current list_del+list_add) and we release them in
iommu_fault_handle_group()

Thanks,
Jean
--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Jean-Philippe Brucker March 6, 2018, 10:37 a.m. UTC | #37
On 05/03/18 15:28, Sinan Kaya wrote:
> On 2/12/2018 1:33 PM, Jean-Philippe Brucker wrote:
>> +static void io_mm_free(struct io_mm *io_mm)
>> +{
>> +	struct mm_struct *mm;
>> +	void (*release)(struct io_mm *);
>> +
>> +	release = io_mm->release;
>> +	mm = io_mm->mm;
>> +
>> +	release(io_mm);
> 
> Is there any reason why you can't call iommu->release()
> here directly? Why do you need the release local variable?

I think I can remove the local variable

Thanks,
Jean

--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Jean-Philippe Brucker March 6, 2018, 10:46 a.m. UTC | #38
On 05/03/18 21:53, Sinan Kaya wrote:
> On 2/12/2018 1:33 PM, Jean-Philippe Brucker wrote:
>> +static struct workqueue_struct *iommu_fault_queue;
> 
> Is there anyway we can make this fault queue per struct device?
> Since this is common code, I think it needs some care.

I don't think it's better, the workqueue struct seems large. Maybe having
one wq per IOMMU is a good compromise? As said in my other reply for this
patch, doing so isn't completely straightforward. I'll consider adding an
iommu pointer to the iommu_param struct attached to each device.

Thanks,
Jean
--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Sinan Kaya March 6, 2018, 12:52 p.m. UTC | #39
On 2018-03-06 05:46, Jean-Philippe Brucker wrote:
> On 05/03/18 21:53, Sinan Kaya wrote:
>> On 2/12/2018 1:33 PM, Jean-Philippe Brucker wrote:
>>> +static struct workqueue_struct *iommu_fault_queue;
>> 
>> Is there anyway we can make this fault queue per struct device?
>> Since this is common code, I think it needs some care.
> 
> I don't think it's better, the workqueue struct seems large. Maybe 
> having
> one wq per IOMMU is a good compromise?

Yes, one per iommu sounds reasonable.


As said in my other reply for this
> patch, doing so isn't completely straightforward. I'll consider adding 
> an
> iommu pointer to the iommu_param struct attached to each device.
> 
> Thanks,
> Jean
> --
> To unsubscribe from this list: send the line "unsubscribe linux-acpi" 
> in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Jonathan Cameron March 8, 2018, 3:40 p.m. UTC | #40
On Mon, 12 Feb 2018 18:33:22 +0000
Jean-Philippe Brucker <jean-philippe.brucker@arm.com> wrote:

> Some systems allow devices to handle IOMMU translation faults in the core
> mm. For example systems supporting the PCI PRI extension or Arm SMMU stall
> model. Infrastructure for reporting such recoverable page faults was
> recently added to the IOMMU core, for SVA virtualization. Extend
> iommu_report_device_fault() to handle host page faults as well.
> 
> * IOMMU drivers instantiate a fault workqueue, using
>   iommu_fault_queue_init() and iommu_fault_queue_destroy().
> 
> * When it receives a fault event, supposedly in an IRQ handler, the IOMMU
>   driver reports the fault using iommu_report_device_fault()
> 
> * If the device driver registered a handler (e.g. VFIO), pass down the
>   fault event. Otherwise submit it to the fault queue, to be handled in a
>   thread.
> 
> * When the fault corresponds to an io_mm, call the mm fault handler on it
>   (in next patch).
> 
> * Once the fault is handled, the mm wrapper or the device driver reports
>   success of failure with iommu_page_response(). The translation is either
>   retried or aborted, depending on the response code.
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
A few really minor points inline...  Basically looks good to me.

> ---
>  drivers/iommu/Kconfig      |  10 ++
>  drivers/iommu/Makefile     |   1 +
>  drivers/iommu/io-pgfault.c | 282 +++++++++++++++++++++++++++++++++++++++++++++
>  drivers/iommu/iommu-sva.c  |   3 -
>  drivers/iommu/iommu.c      |  31 ++---
>  include/linux/iommu.h      |  34 +++++-
>  6 files changed, 339 insertions(+), 22 deletions(-)
>  create mode 100644 drivers/iommu/io-pgfault.c
> 
> diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
> index 146eebe9a4bb..e751bb9958ba 100644
> --- a/drivers/iommu/Kconfig
> +++ b/drivers/iommu/Kconfig
> @@ -85,6 +85,15 @@ config IOMMU_SVA
>  
>  	  If unsure, say N here.
>  
> +config IOMMU_FAULT
> +	bool "Fault handler for the IOMMU API"
> +	select IOMMU_API
> +	help
> +	  Enable the generic fault handler for the IOMMU API, that handles
> +	  recoverable page faults or inject them into guests.
> +
> +	  If unsure, say N here.
> +
>  config FSL_PAMU
>  	bool "Freescale IOMMU support"
>  	depends on PCI
> @@ -156,6 +165,7 @@ config INTEL_IOMMU
>  	select IOMMU_API
>  	select IOMMU_IOVA
>  	select DMAR_TABLE
> +	select IOMMU_FAULT
>  	help
>  	  DMA remapping (DMAR) devices support enables independent address
>  	  translations for Direct Memory Access (DMA) from devices.
> diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
> index 1dbcc89ebe4c..f4324e29035e 100644
> --- a/drivers/iommu/Makefile
> +++ b/drivers/iommu/Makefile
> @@ -4,6 +4,7 @@ obj-$(CONFIG_IOMMU_API) += iommu-traces.o
>  obj-$(CONFIG_IOMMU_API) += iommu-sysfs.o
>  obj-$(CONFIG_IOMMU_DMA) += dma-iommu.o
>  obj-$(CONFIG_IOMMU_SVA) += iommu-sva.o
> +obj-$(CONFIG_IOMMU_FAULT) += io-pgfault.o
>  obj-$(CONFIG_IOMMU_IO_PGTABLE) += io-pgtable.o
>  obj-$(CONFIG_IOMMU_IO_PGTABLE_ARMV7S) += io-pgtable-arm-v7s.o
>  obj-$(CONFIG_IOMMU_IO_PGTABLE_LPAE) += io-pgtable-arm.o
> diff --git a/drivers/iommu/io-pgfault.c b/drivers/iommu/io-pgfault.c
> new file mode 100644
> index 000000000000..33309ed316d2
> --- /dev/null
> +++ b/drivers/iommu/io-pgfault.c
> @@ -0,0 +1,282 @@
> +/*
> + * Handle device page faults
> + *
> + * Copyright (C) 2018 ARM Ltd.
> + * Author: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> + *
> + * SPDX-License-Identifier: GPL-2.0
> + */
> +
> +#include <linux/iommu.h>
> +#include <linux/list.h>
> +#include <linux/slab.h>
> +#include <linux/workqueue.h>
> +
> +static struct workqueue_struct *iommu_fault_queue;
> +static DECLARE_RWSEM(iommu_fault_queue_sem);
> +static refcount_t iommu_fault_queue_refs = REFCOUNT_INIT(0);
> +static BLOCKING_NOTIFIER_HEAD(iommu_fault_queue_flush_notifiers);
> +
> +/* Used to store incomplete fault groups */
> +static LIST_HEAD(iommu_partial_faults);
> +static DEFINE_SPINLOCK(iommu_partial_faults_lock);
> +
> +struct iommu_fault_context {
> +	struct device			*dev;
> +	struct iommu_fault_event	evt;
> +	struct list_head		head;
> +};
> +
> +struct iommu_fault_group {
> +	struct iommu_domain		*domain;
> +	struct iommu_fault_context	last_fault;
> +	struct list_head		faults;
> +	struct work_struct		work;
> +};
> +
> +/*
> + * iommu_fault_complete() - Finish handling a fault
> + *
> + * Send a response if necessary and pass on the sanitized status code
> + */
> +static int iommu_fault_complete(struct iommu_domain *domain, struct device *dev,
> +				struct iommu_fault_event *evt, int status)
> +{
> +	struct page_response_msg resp = {
> +		.addr		= evt->addr,
> +		.pasid		= evt->pasid,
> +		.pasid_present	= evt->pasid_valid,
> +		.page_req_group_id = evt->page_req_group_id,
Really trivial, but if you want to align the equals signs, the all need indenting
one more tab.

> +		.type		= IOMMU_PAGE_GROUP_RESP,
> +		.private_data	= evt->iommu_private,
> +	};
> +
> +	/*
> +	 * There is no "handling" an unrecoverable fault, so the only valid
> +	 * return values are 0 or an error.
> +	 */
> +	if (evt->type == IOMMU_FAULT_DMA_UNRECOV)
> +		return status > 0 ? 0 : status;
> +
> +	/* Someone took ownership of the fault and will complete it later */
> +	if (status == IOMMU_PAGE_RESP_HANDLED)
> +		return 0;
> +
> +	/*
> +	 * There was an internal error with handling the recoverable fault. Try
> +	 * to complete the fault if possible.
> +	 */
> +	if (status < 0)
> +		status = IOMMU_PAGE_RESP_INVALID;
> +
> +	if (WARN_ON(!domain->ops->page_response))
> +		/*
> +		 * The IOMMU driver shouldn't have submitted recoverable faults
> +		 * if it cannot receive a response.
> +		 */
> +		return -EINVAL;
> +
> +	resp.resp_code = status;
> +	return domain->ops->page_response(domain, dev, &resp);
> +}
> +
> +static int iommu_fault_handle_single(struct iommu_fault_context *fault)
> +{
> +	/* TODO */
> +	return -ENODEV;
> +}
> +
> +static void iommu_fault_handle_group(struct work_struct *work)
> +{
> +	struct iommu_fault_group *group;
> +	struct iommu_fault_context *fault, *next;
> +	int status = IOMMU_PAGE_RESP_SUCCESS;
> +
> +	group = container_of(work, struct iommu_fault_group, work);
> +
> +	list_for_each_entry_safe(fault, next, &group->faults, head) {
> +		struct iommu_fault_event *evt = &fault->evt;
> +		/*
> +		 * Errors are sticky: don't handle subsequent faults in the
> +		 * group if there is an error.
> +		 */
> +		if (status == IOMMU_PAGE_RESP_SUCCESS)
> +			status = iommu_fault_handle_single(fault);
> +
> +		if (!evt->last_req)
> +			kfree(fault);
> +	}
> +
> +	iommu_fault_complete(group->domain, group->last_fault.dev,
> +			     &group->last_fault.evt, status);
> +	kfree(group);
> +}
> +
> +static int iommu_queue_fault(struct iommu_domain *domain, struct device *dev,
> +			     struct iommu_fault_event *evt)
> +{
> +	struct iommu_fault_group *group;
> +	struct iommu_fault_context *fault, *next;
> +
> +	if (!iommu_fault_queue)
> +		return -ENOSYS;
> +
> +	if (!evt->last_req) {
> +		fault = kzalloc(sizeof(*fault), GFP_KERNEL);
> +		if (!fault)
> +			return -ENOMEM;
> +
> +		fault->evt = *evt;
> +		fault->dev = dev;
> +
> +		/* Non-last request of a group. Postpone until the last one */
> +		spin_lock(&iommu_partial_faults_lock);
> +		list_add_tail(&fault->head, &iommu_partial_faults);
> +		spin_unlock(&iommu_partial_faults_lock);
> +
> +		return IOMMU_PAGE_RESP_HANDLED;
> +	}
> +
> +	group = kzalloc(sizeof(*group), GFP_KERNEL);
> +	if (!group)
> +		return -ENOMEM;
> +
> +	group->last_fault.evt = *evt;
> +	group->last_fault.dev = dev;
> +	group->domain = domain;
> +	INIT_LIST_HEAD(&group->faults);
> +	list_add(&group->last_fault.head, &group->faults);
> +	INIT_WORK(&group->work, iommu_fault_handle_group);
> +
> +	/* See if we have pending faults for this group */
> +	spin_lock(&iommu_partial_faults_lock);
> +	list_for_each_entry_safe(fault, next, &iommu_partial_faults, head) {
> +		if (fault->evt.page_req_group_id == evt->page_req_group_id &&
> +		    fault->dev == dev) {
> +			list_del(&fault->head);
> +			/* Insert *before* the last fault */
> +			list_add(&fault->head, &group->faults);
> +		}
> +	}
> +	spin_unlock(&iommu_partial_faults_lock);
> +
> +	queue_work(iommu_fault_queue, &group->work);
> +
> +	/* Postpone the fault completion */
> +	return IOMMU_PAGE_RESP_HANDLED;
> +}
> +
> +/**
> + * iommu_report_device_fault() - Handle fault in device driver or mm
> + *
> + * If the device driver expressed interest in handling fault, report it through
> + * the callback. If the fault is recoverable, try to page in the address.
> + */
> +int iommu_report_device_fault(struct device *dev, struct iommu_fault_event *evt)
> +{
> +	int ret = -ENOSYS;
> +	struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
> +
> +	if (!domain)
> +		return -ENODEV;
> +
> +	/*
> +	 * if upper layers showed interest and installed a fault handler,
> +	 * invoke it.
> +	 */
> +	if (iommu_has_device_fault_handler(dev)) {
> +		struct iommu_fault_param *param = dev->iommu_param->fault_param;
> +
> +		return param->handler(evt, param->data);
> +	}
> +
> +	/* If the handler is blocking, handle fault in the workqueue */
> +	if (evt->type == IOMMU_FAULT_PAGE_REQ)
> +		ret = iommu_queue_fault(domain, dev, evt);
> +
> +	return iommu_fault_complete(domain, dev, evt, ret);
> +}
> +EXPORT_SYMBOL_GPL(iommu_report_device_fault);
> +
> +/**
> + * iommu_fault_queue_register() - register an IOMMU driver to the fault queue
> + * @flush_notifier: a notifier block that is called before the fault queue is
> + * flushed. The IOMMU driver should commit all faults that are pending in its
> + * low-level queues at the time of the call, into the fault queue. The notifier
> + * takes a device pointer as argument, hinting what endpoint is causing the
> + * flush. When the device is NULL, all faults should be committed.
> + */
> +int iommu_fault_queue_register(struct notifier_block *flush_notifier)
> +{
> +	/*
> +	 * The WQ is unordered because the low-level handler enqueues faults by
> +	 * group. PRI requests within a group have to be ordered, but once
> +	 * that's dealt with, the high-level function can handle groups out of
> +	 * order.
> +	 */
> +	down_write(&iommu_fault_queue_sem);
> +	if (!iommu_fault_queue) {
> +		iommu_fault_queue = alloc_workqueue("iommu_fault_queue",
> +						    WQ_UNBOUND, 0);
> +		if (iommu_fault_queue)
> +			refcount_set(&iommu_fault_queue_refs, 1);
> +	} else {
> +		refcount_inc(&iommu_fault_queue_refs);
> +	}
> +	up_write(&iommu_fault_queue_sem);
> +
> +	if (!iommu_fault_queue)
> +		return -ENOMEM;
> +
> +	if (flush_notifier)
> +		blocking_notifier_chain_register(&iommu_fault_queue_flush_notifiers,
> +						 flush_notifier);
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL_GPL(iommu_fault_queue_register);
> +
> +/**
> + * iommu_fault_queue_flush() - Ensure that all queued faults have been
> + * processed.
> + * @dev: the endpoint whose faults need to be flushed. If NULL, flush all
> + *       pending faults.
> + *
> + * Users must call this function when releasing a PASID, to ensure that all
> + * pending faults affecting this PASID have been handled, and won't affect the
> + * address space of a subsequent process that reuses this PASID.
> + */
> +void iommu_fault_queue_flush(struct device *dev)
> +{
> +	blocking_notifier_call_chain(&iommu_fault_queue_flush_notifiers, 0, dev);
> +
> +	down_read(&iommu_fault_queue_sem);
> +	/*
> +	 * Don't flush the partial faults list. All PRGs with the PASID are
> +	 * complete and have been submitted to the queue.
> +	 */
> +	if (iommu_fault_queue)
> +		flush_workqueue(iommu_fault_queue);
> +	up_read(&iommu_fault_queue_sem);
> +}
> +EXPORT_SYMBOL_GPL(iommu_fault_queue_flush);
> +
> +/**
> + * iommu_fault_queue_unregister() - Unregister an IOMMU driver from the fault
> + * queue.
> + * @flush_notifier: same parameter as iommu_fault_queue_register
> + */
> +void iommu_fault_queue_unregister(struct notifier_block *flush_notifier)
> +{
> +	down_write(&iommu_fault_queue_sem);
> +	if (refcount_dec_and_test(&iommu_fault_queue_refs)) {
> +		destroy_workqueue(iommu_fault_queue);
> +		iommu_fault_queue = NULL;
> +	}
> +	up_write(&iommu_fault_queue_sem);
> +
> +	if (flush_notifier)
> +		blocking_notifier_chain_unregister(&iommu_fault_queue_flush_notifiers,
> +						   flush_notifier);
I would expect the ordering in queue_unregister to be the reverse of queue
register (to make it obvious there are no races).

That would put this last block at the start before potentially destroying
the work queue.  If I'm missing something then perhaps a comment to
explain why the ordering is not the obvious one?

> +}
> +EXPORT_SYMBOL_GPL(iommu_fault_queue_unregister);
> diff --git a/drivers/iommu/iommu-sva.c b/drivers/iommu/iommu-sva.c
> index 4bc2a8c12465..d7b231cd7355 100644
> --- a/drivers/iommu/iommu-sva.c
> +++ b/drivers/iommu/iommu-sva.c
> @@ -102,9 +102,6 @@
>   * the device table and PASID 0 would be available to the allocator.
>   */
>  
> -/* TODO: stub for the fault queue. Remove later. */
> -#define iommu_fault_queue_flush(...)
> -
>  struct iommu_bond {
>  	struct io_mm		*io_mm;
>  	struct device		*dev;
> diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
> index 1d60b32a6744..c475893ec7dc 100644
> --- a/drivers/iommu/iommu.c
> +++ b/drivers/iommu/iommu.c
> @@ -798,6 +798,17 @@ int iommu_group_unregister_notifier(struct iommu_group *group,
>  }
>  EXPORT_SYMBOL_GPL(iommu_group_unregister_notifier);
>  
> +/**
> + * iommu_register_device_fault_handler() - Register a device fault handler
> + * @dev: the device
> + * @handler: the fault handler
> + * @data: private data passed as argument to the callback
> + *
> + * When an IOMMU fault event is received, call this handler with the fault event
> + * and data as argument.
> + *
> + * Return 0 if the fault handler was installed successfully, or an error.
> + */
>  int iommu_register_device_fault_handler(struct device *dev,
>  					iommu_dev_fault_handler_t handler,
>  					void *data)
> @@ -825,6 +836,13 @@ int iommu_register_device_fault_handler(struct device *dev,
>  }
>  EXPORT_SYMBOL_GPL(iommu_register_device_fault_handler);
>  
> +/**
> + * iommu_unregister_device_fault_handler() - Unregister the device fault handler
> + * @dev: the device
> + *
> + * Remove the device fault handler installed with
> + * iommu_register_device_fault_handler().
> + */
>  int iommu_unregister_device_fault_handler(struct device *dev)
>  {
>  	struct iommu_param *idata = dev->iommu_param;
> @@ -840,19 +858,6 @@ int iommu_unregister_device_fault_handler(struct device *dev)
>  }
>  EXPORT_SYMBOL_GPL(iommu_unregister_device_fault_handler);
>  
> -
> -int iommu_report_device_fault(struct device *dev, struct iommu_fault_event *evt)
> -{
> -	/* we only report device fault if there is a handler registered */
> -	if (!dev->iommu_param || !dev->iommu_param->fault_param ||
> -		!dev->iommu_param->fault_param->handler)
> -		return -ENOSYS;
> -
> -	return dev->iommu_param->fault_param->handler(evt,
> -						dev->iommu_param->fault_param->data);
> -}
> -EXPORT_SYMBOL_GPL(iommu_report_device_fault);
> -
>  /**
>   * iommu_group_id - Return ID for a group
>   * @group: the group to ID
> diff --git a/include/linux/iommu.h b/include/linux/iommu.h
> index 226ab4f3ae0e..65e56f28e0ce 100644
> --- a/include/linux/iommu.h
> +++ b/include/linux/iommu.h
> @@ -205,6 +205,7 @@ struct page_response_msg {
>  	u32 resp_code:4;
>  #define IOMMU_PAGE_RESP_SUCCESS	0
>  #define IOMMU_PAGE_RESP_INVALID	1
> +#define IOMMU_PAGE_RESP_HANDLED	2
>  #define IOMMU_PAGE_RESP_FAILURE	0xF
>  
>  	u32 pasid_present:1;
> @@ -534,7 +535,6 @@ extern int iommu_register_device_fault_handler(struct device *dev,
>  
>  extern int iommu_unregister_device_fault_handler(struct device *dev);
>  
> -extern int iommu_report_device_fault(struct device *dev, struct iommu_fault_event *evt);
>  extern int iommu_page_response(struct iommu_domain *domain, struct device *dev,
>  			       struct page_response_msg *msg);
>  
> @@ -836,11 +836,6 @@ static inline bool iommu_has_device_fault_handler(struct device *dev)
>  	return false;
>  }
>  
> -static inline int iommu_report_device_fault(struct device *dev, struct iommu_fault_event *evt)
> -{
> -	return 0;
> -}
> -
>  static inline int iommu_page_response(struct iommu_domain *domain, struct device *dev,
>  				      struct page_response_msg *msg)
>  {
> @@ -1005,4 +1000,31 @@ static inline struct mm_struct *iommu_sva_find(int pasid)
>  }
>  #endif /* CONFIG_IOMMU_SVA */
>  
> +#ifdef CONFIG_IOMMU_FAULT
> +extern int iommu_fault_queue_register(struct notifier_block *flush_notifier);
> +extern void iommu_fault_queue_flush(struct device *dev);
> +extern void iommu_fault_queue_unregister(struct notifier_block *flush_notifier);
> +extern int iommu_report_device_fault(struct device *dev,
> +				     struct iommu_fault_event *evt);
> +#else /* CONFIG_IOMMU_FAULT */
> +static inline int iommu_fault_queue_register(struct notifier_block *flush_notifier)
> +{
> +	return -ENODEV;
> +}
> +
> +static inline void iommu_fault_queue_flush(struct device *dev)
> +{
> +}
> +
> +static inline void iommu_fault_queue_unregister(struct notifier_block *flush_notifier)
> +{
> +}
> +
> +static inline int iommu_report_device_fault(struct device *dev,
> +					    struct iommu_fault_event *evt)
> +{
> +	return 0;
> +}
> +#endif /* CONFIG_IOMMU_FAULT */
> +
>  #endif /* __LINUX_IOMMU_H */

--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Jonathan Cameron March 8, 2018, 4:17 p.m. UTC | #41
On Mon, 12 Feb 2018 18:33:46 +0000
Jean-Philippe Brucker <jean-philippe.brucker@arm.com> wrote:

> PCIe devices can implement their own TLB, named Address Translation Cache
> (ATC). Enable Address Translation Service (ATS) for devices that support
> it and send them invalidation requests whenever we invalidate the IOTLBs.
> 
>   Range calculation
>   -----------------
> 
> The invalidation packet itself is a bit awkward: range must be naturally
> aligned, which means that the start address is a multiple of the range
> size. In addition, the size must be a power of two number of 4k pages. We
> have a few options to enforce this constraint:
> 
> (1) Find the smallest naturally aligned region that covers the requested
>     range. This is simple to compute and only takes one ATC_INV, but it
>     will spill on lots of neighbouring ATC entries.
> 
> (2) Align the start address to the region size (rounded up to a power of
>     two), and send a second invalidation for the next range of the same
>     size. Still not great, but reduces spilling.
> 
> (3) Cover the range exactly with the smallest number of naturally aligned
>     regions. This would be interesting to implement but as for (2),
>     requires multiple ATC_INV.
> 
> As I suspect ATC invalidation packets will be a very scarce resource, I'll
> go with option (1) for now, and only send one big invalidation. We can
> move to (2), which is both easier to read and more gentle with the ATC,
> once we've observed on real systems that we can send multiple smaller
> Invalidation Requests for roughly the same price as a single big one.
> 
> Note that with io-pgtable, the unmap function is called for each page, so
> this doesn't matter. The problem shows up when sharing page tables with
> the MMU.
> 
>   Timeout
>   -------
> 
> ATC invalidation is allowed to take up to 90 seconds, according to the
> PCIe spec, so it is possible to hit the SMMU command queue timeout during
> normal operations.
> 
> Some SMMU implementations will raise a CERROR_ATC_INV_SYNC when a CMD_SYNC
> fails because of an ATC invalidation. Some will just abort the CMD_SYNC.
> Others might let CMD_SYNC complete and have an asynchronous IMPDEF
> mechanism to record the error. When we receive a CERROR_ATC_INV_SYNC, we
> could retry sending all ATC_INV since last successful CMD_SYNC. When a
> CMD_SYNC fails without CERROR_ATC_INV_SYNC, we could retry sending *all*
> commands since last successful CMD_SYNC.
> 
> We cannot afford to wait 90 seconds in iommu_unmap, let alone MMU
> notifiers. So we'd have to introduce a more clever system if this timeout
> becomes a problem, like keeping hold of mappings and invalidating in the
> background. Implementing safe delayed invalidations is a very complex
> problem and deserves a series of its own. We'll assess whether more work
> is needed to properly handle ATC invalidation timeouts once this code runs
> on real hardware.
> 
>   Misc
>   ----
> 
> I didn't put ATC and TLB invalidations in the same functions for three
> reasons:
> 
> * TLB invalidation by range is batched and committed with a single sync.
>   Batching ATC invalidation is inconvenient, endpoints limit the number of
>   inflight invalidations. We'd have to count the number of invalidations
>   queued and send a sync periodically. In addition, I suspect we always
>   need a sync between TLB and ATC invalidation for the same page.
> 
> * Doing ATC invalidation outside tlb_inv_range also allows to send less
>   requests, since TLB invalidations are done per page or block, while ATC
>   invalidations target IOVA ranges.
> 
> * TLB invalidation by context is performed when freeing the domain, at
>   which point there isn't any device attached anymore.
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Few minor error path related comments inline..

> ---
>  drivers/iommu/arm-smmu-v3.c | 236 ++++++++++++++++++++++++++++++++++++++++++--
>  1 file changed, 226 insertions(+), 10 deletions(-)
> 
> diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
> index 8b9f5dd06be0..76513135310f 100644
> --- a/drivers/iommu/arm-smmu-v3.c
> +++ b/drivers/iommu/arm-smmu-v3.c
> @@ -37,6 +37,7 @@
>  #include <linux/of_iommu.h>
>  #include <linux/of_platform.h>
>  #include <linux/pci.h>
> +#include <linux/pci-ats.h>
>  #include <linux/platform_device.h>
>  #include <linux/sched/mm.h>
>  
> @@ -109,6 +110,7 @@
>  #define IDR5_OAS_48_BIT			(5 << IDR5_OAS_SHIFT)
>  
>  #define ARM_SMMU_CR0			0x20
> +#define CR0_ATSCHK			(1 << 4)
>  #define CR0_CMDQEN			(1 << 3)
>  #define CR0_EVTQEN			(1 << 2)
>  #define CR0_PRIQEN			(1 << 1)
> @@ -304,6 +306,7 @@
>  #define CMDQ_ERR_CERROR_NONE_IDX	0
>  #define CMDQ_ERR_CERROR_ILL_IDX		1
>  #define CMDQ_ERR_CERROR_ABT_IDX		2
> +#define CMDQ_ERR_CERROR_ATC_INV_IDX	3
>  
>  #define CMDQ_0_OP_SHIFT			0
>  #define CMDQ_0_OP_MASK			0xffUL
> @@ -327,6 +330,15 @@
>  #define CMDQ_TLBI_1_VA_MASK		~0xfffUL
>  #define CMDQ_TLBI_1_IPA_MASK		0xfffffffff000UL
>  
> +#define CMDQ_ATC_0_SSID_SHIFT		12
> +#define CMDQ_ATC_0_SSID_MASK		0xfffffUL
> +#define CMDQ_ATC_0_SID_SHIFT		32
> +#define CMDQ_ATC_0_SID_MASK		0xffffffffUL
> +#define CMDQ_ATC_0_GLOBAL		(1UL << 9)
> +#define CMDQ_ATC_1_SIZE_SHIFT		0
> +#define CMDQ_ATC_1_SIZE_MASK		0x3fUL
> +#define CMDQ_ATC_1_ADDR_MASK		~0xfffUL
> +
>  #define CMDQ_PRI_0_SSID_SHIFT		12
>  #define CMDQ_PRI_0_SSID_MASK		0xfffffUL
>  #define CMDQ_PRI_0_SID_SHIFT		32
> @@ -425,6 +437,11 @@ module_param_named(disable_bypass, disable_bypass, bool, S_IRUGO);
>  MODULE_PARM_DESC(disable_bypass,
>  	"Disable bypass streams such that incoming transactions from devices that are not attached to an iommu domain will report an abort back to the device and will not be allowed to pass through the SMMU.");
>  
> +static bool disable_ats_check;
> +module_param_named(disable_ats_check, disable_ats_check, bool, S_IRUGO);
> +MODULE_PARM_DESC(disable_ats_check,
> +	"By default, the SMMU checks whether each incoming transaction marked as translated is allowed by the stream configuration. This option disables the check.");
> +
>  enum pri_resp {
>  	PRI_RESP_DENY,
>  	PRI_RESP_FAIL,
> @@ -498,6 +515,16 @@ struct arm_smmu_cmdq_ent {
>  			u64			addr;
>  		} tlbi;
>  
> +		#define CMDQ_OP_ATC_INV		0x40
> +		#define ATC_INV_SIZE_ALL	52
> +		struct {
> +			u32			sid;
> +			u32			ssid;
> +			u64			addr;
> +			u8			size;
> +			bool			global;
> +		} atc;
> +
>  		#define CMDQ_OP_PRI_RESP	0x41
>  		struct {
>  			u32			sid;
> @@ -928,6 +955,14 @@ static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
>  	case CMDQ_OP_TLBI_EL2_ASID:
>  		cmd[0] |= (u64)ent->tlbi.asid << CMDQ_TLBI_0_ASID_SHIFT;
>  		break;
> +	case CMDQ_OP_ATC_INV:
> +		cmd[0] |= ent->substream_valid ? CMDQ_0_SSV : 0;
> +		cmd[0] |= ent->atc.global ? CMDQ_ATC_0_GLOBAL : 0;
> +		cmd[0] |= ent->atc.ssid << CMDQ_ATC_0_SSID_SHIFT;
> +		cmd[0] |= (u64)ent->atc.sid << CMDQ_ATC_0_SID_SHIFT;
> +		cmd[1] |= ent->atc.size << CMDQ_ATC_1_SIZE_SHIFT;
> +		cmd[1] |= ent->atc.addr & CMDQ_ATC_1_ADDR_MASK;
> +		break;
>  	case CMDQ_OP_PRI_RESP:
>  		cmd[0] |= ent->substream_valid ? CMDQ_0_SSV : 0;
>  		cmd[0] |= ent->pri.ssid << CMDQ_PRI_0_SSID_SHIFT;
> @@ -984,6 +1019,7 @@ static void arm_smmu_cmdq_skip_err(struct arm_smmu_device *smmu)
>  		[CMDQ_ERR_CERROR_NONE_IDX]	= "No error",
>  		[CMDQ_ERR_CERROR_ILL_IDX]	= "Illegal command",
>  		[CMDQ_ERR_CERROR_ABT_IDX]	= "Abort on command fetch",
> +		[CMDQ_ERR_CERROR_ATC_INV_IDX]	= "ATC invalidate timeout",
>  	};
>  
>  	int i;
> @@ -1003,6 +1039,14 @@ static void arm_smmu_cmdq_skip_err(struct arm_smmu_device *smmu)
>  		dev_err(smmu->dev, "retrying command fetch\n");
>  	case CMDQ_ERR_CERROR_NONE_IDX:
>  		return;
> +	case CMDQ_ERR_CERROR_ATC_INV_IDX:
> +		/*
> +		 * ATC Invalidation Completion timeout. CONS is still pointing
> +		 * at the CMD_SYNC. Attempt to complete other pending commands
> +		 * by repeating the CMD_SYNC, though we might well end up back
> +		 * here since the ATC invalidation may still be pending.
> +		 */
> +		return;
>  	case CMDQ_ERR_CERROR_ILL_IDX:
>  		/* Fallthrough */
>  	default:
> @@ -1261,9 +1305,6 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_device *smmu, u32 sid,
>  			 STRTAB_STE_1_S1C_CACHE_WBRA
>  			 << STRTAB_STE_1_S1COR_SHIFT |
>  			 STRTAB_STE_1_S1C_SH_ISH << STRTAB_STE_1_S1CSH_SHIFT |
> -#ifdef CONFIG_PCI_ATS
> -			 STRTAB_STE_1_EATS_TRANS << STRTAB_STE_1_EATS_SHIFT |
> -#endif
>  			 (smmu->features & ARM_SMMU_FEAT_E2H ?
>  			  STRTAB_STE_1_STRW_EL2 : STRTAB_STE_1_STRW_NSEL1) <<
>  			 STRTAB_STE_1_STRW_SHIFT);
> @@ -1300,6 +1341,10 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_device *smmu, u32 sid,
>  		val |= STRTAB_STE_0_CFG_S2_TRANS;
>  	}
>  
> +	if (IS_ENABLED(CONFIG_PCI_ATS))
> +		dst[1] |= cpu_to_le64(STRTAB_STE_1_EATS_TRANS
> +				      << STRTAB_STE_1_EATS_SHIFT);
> +
>  	arm_smmu_sync_ste_for_sid(smmu, sid);
>  	dst[0] = cpu_to_le64(val);
>  	arm_smmu_sync_ste_for_sid(smmu, sid);
> @@ -1680,6 +1725,104 @@ static irqreturn_t arm_smmu_combined_irq_handler(int irq, void *dev)
>  	return IRQ_WAKE_THREAD;
>  }
>  
> +/* ATS invalidation */
> +static bool arm_smmu_master_has_ats(struct arm_smmu_master_data *master)
> +{
> +	return dev_is_pci(master->dev) && to_pci_dev(master->dev)->ats_enabled;
> +}
> +
> +static void
> +arm_smmu_atc_inv_to_cmd(int ssid, unsigned long iova, size_t size,
> +			struct arm_smmu_cmdq_ent *cmd)
> +{
> +	size_t log2_span;
> +	size_t span_mask;
> +	/* ATC invalidates are always on 4096 bytes pages */
> +	size_t inval_grain_shift = 12;
> +	unsigned long page_start, page_end;
> +
> +	*cmd = (struct arm_smmu_cmdq_ent) {
> +		.opcode			= CMDQ_OP_ATC_INV,
> +		.substream_valid	= !!ssid,
> +		.atc.ssid		= ssid,
> +	};
> +
> +	if (!size) {
> +		cmd->atc.size = ATC_INV_SIZE_ALL;
> +		return;
> +	}
> +
> +	page_start	= iova >> inval_grain_shift;
> +	page_end	= (iova + size - 1) >> inval_grain_shift;
> +
> +	/*
> +	 * Find the smallest power of two that covers the range. Most
> +	 * significant differing bit between start and end address indicates the
> +	 * required span, ie. fls(start ^ end). For example:
> +	 *
> +	 * We want to invalidate pages [8; 11]. This is already the ideal range:
> +	 *		x = 0b1000 ^ 0b1011 = 0b11
> +	 *		span = 1 << fls(x) = 4
> +	 *
> +	 * To invalidate pages [7; 10], we need to invalidate [0; 15]:
> +	 *		x = 0b0111 ^ 0b1010 = 0b1101
> +	 *		span = 1 << fls(x) = 16
> +	 */
> +	log2_span	= fls_long(page_start ^ page_end);
> +	span_mask	= (1ULL << log2_span) - 1;
> +
> +	page_start	&= ~span_mask;
> +
> +	cmd->atc.addr	= page_start << inval_grain_shift;
> +	cmd->atc.size	= log2_span;
> +}
> +
> +static int arm_smmu_atc_inv_master(struct arm_smmu_master_data *master,
> +				   struct arm_smmu_cmdq_ent *cmd)
> +{
> +	int i;
> +	struct iommu_fwspec *fwspec = master->dev->iommu_fwspec;
> +
> +	if (!arm_smmu_master_has_ats(master))
> +		return 0;
> +
> +	for (i = 0; i < fwspec->num_ids; i++) {
> +		cmd->atc.sid = fwspec->ids[i];
> +		arm_smmu_cmdq_issue_cmd(master->smmu, cmd);
> +	}
> +
> +	arm_smmu_cmdq_issue_sync(master->smmu);
> +
> +	return 0;
> +}
> +
> +static int arm_smmu_atc_inv_master_all(struct arm_smmu_master_data *master,
> +				       int ssid)
> +{
> +	struct arm_smmu_cmdq_ent cmd;
> +
> +	arm_smmu_atc_inv_to_cmd(ssid, 0, 0, &cmd);
> +	return arm_smmu_atc_inv_master(master, &cmd);
> +}
> +
> +static size_t
> +arm_smmu_atc_inv_domain(struct arm_smmu_domain *smmu_domain, int ssid,
> +			unsigned long iova, size_t size)
> +{
> +	unsigned long flags;
> +	struct arm_smmu_cmdq_ent cmd;
> +	struct arm_smmu_master_data *master;
> +
> +	arm_smmu_atc_inv_to_cmd(ssid, iova, size, &cmd);
> +
> +	spin_lock_irqsave(&smmu_domain->devices_lock, flags);
> +	list_for_each_entry(master, &smmu_domain->devices, list)
> +		arm_smmu_atc_inv_master(master, &cmd);
> +	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
> +
> +	return size;
> +}
> +
>  /* IO_PGTABLE API */
>  static void __arm_smmu_tlb_sync(struct arm_smmu_device *smmu)
>  {
> @@ -2092,6 +2235,8 @@ static void arm_smmu_detach_dev(struct device *dev)
>  	if (smmu_domain) {
>  		__iommu_sva_unbind_dev_all(dev);
>  
> +		arm_smmu_atc_inv_master_all(master, 0);
> +
>  		spin_lock_irqsave(&smmu_domain->devices_lock, flags);
>  		list_del(&master->list);
>  		spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
> @@ -2179,12 +2324,19 @@ static int arm_smmu_map(struct iommu_domain *domain, unsigned long iova,
>  static size_t
>  arm_smmu_unmap(struct iommu_domain *domain, unsigned long iova, size_t size)
>  {
> -	struct io_pgtable_ops *ops = to_smmu_domain(domain)->pgtbl_ops;
> +	int ret;
> +	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
> +	struct io_pgtable_ops *ops = smmu_domain->pgtbl_ops;
>  
>  	if (!ops)
>  		return 0;
>  
> -	return ops->unmap(ops, iova, size);
> +	ret = ops->unmap(ops, iova, size);
> +
> +	if (ret && smmu_domain->smmu->features & ARM_SMMU_FEAT_ATS)
> +		ret = arm_smmu_atc_inv_domain(smmu_domain, 0, iova, size);
> +
> +	return ret;
>  }
>  
>  static void arm_smmu_iotlb_sync(struct iommu_domain *domain)
> @@ -2342,6 +2494,48 @@ static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
>  	return sid < limit;
>  }
>  
> +static int arm_smmu_enable_ats(struct arm_smmu_master_data *master)
> +{
> +	int ret;
> +	size_t stu;
> +	struct pci_dev *pdev;
> +	struct arm_smmu_device *smmu = master->smmu;
> +	struct iommu_fwspec *fwspec = master->dev->iommu_fwspec;
> +
> +	if (!(smmu->features & ARM_SMMU_FEAT_ATS) || !dev_is_pci(master->dev) ||
> +	    (fwspec->flags & IOMMU_FWSPEC_PCI_NO_ATS))
> +		return -ENOSYS;
> +
> +	pdev = to_pci_dev(master->dev);
> +
> +	/* Smallest Translation Unit: log2 of the smallest supported granule */
> +	stu = __ffs(smmu->pgsize_bitmap);
> +
> +	ret = pci_enable_ats(pdev, stu);
> +	if (ret)
> +		return ret;
> +
> +	dev_dbg(&pdev->dev, "enabled ATS (STU=%zu, QDEP=%d)\n", stu,
> +		pci_ats_queue_depth(pdev));
> +
> +	return 0;
> +}
> +
> +static void arm_smmu_disable_ats(struct arm_smmu_master_data *master)
> +{
> +	struct pci_dev *pdev;
> +
> +	if (!dev_is_pci(master->dev))
> +		return;
> +
> +	pdev = to_pci_dev(master->dev);
> +
> +	if (!pdev->ats_enabled)
> +		return;
> +
> +	pci_disable_ats(pdev);
> +}
> +
>  static int arm_smmu_insert_master(struct arm_smmu_device *smmu,
>  				  struct arm_smmu_master_data *master)
>  {
> @@ -2462,14 +2656,24 @@ static int arm_smmu_add_device(struct device *dev)
>  		master->ste.can_stall = true;
>  	}
>  
> +	arm_smmu_enable_ats(master);
It's a bit nasty not to handle the errors that this could output (other than
the ENOSYS for when it's not available). Seems that it would be nice to at
least add a note to the log if people are expecting it to work and it won't
because some condition or other isn't met.

> +
>  	group = iommu_group_get_for_dev(dev);
> -	if (!IS_ERR(group)) {
> -		arm_smmu_insert_master(smmu, master);
> -		iommu_group_put(group);
> -		iommu_device_link(&smmu->iommu, dev);
> +	if (IS_ERR(group)) {
> +		ret = PTR_ERR(group);
> +		goto err_disable_ats;
>  	}
>  
> -	return PTR_ERR_OR_ZERO(group);
> +	iommu_group_put(group);
> +	arm_smmu_insert_master(smmu, master);
> +	iommu_device_link(&smmu->iommu, dev);
> +
> +	return 0;
> +
> +err_disable_ats:
> +	arm_smmu_disable_ats(master);
master is leaked here I think...
Possibly other things as this doesn't line up with the
remove which I'd have mostly expected it to do.

There are some slightly fishy bits of ordering in the original code
anyway that I'm not seeing justification for (why is
the iommu_device_unlink later than one might expect for
example).

> +
> +	return ret;
>  }
>  
>  static void arm_smmu_remove_device(struct device *dev)
> @@ -2486,6 +2690,8 @@ static void arm_smmu_remove_device(struct device *dev)
>  	if (master && master->ste.assigned)
>  		arm_smmu_detach_dev(dev);
>  	arm_smmu_remove_master(smmu, master);
> +	arm_smmu_disable_ats(master);
> +
>  	iommu_group_remove_device(dev);
>  	iommu_device_unlink(&smmu->iommu, dev);
>  	kfree(master);
> @@ -3094,6 +3300,16 @@ static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass)
>  		}
>  	}
>  
> +	if (smmu->features & ARM_SMMU_FEAT_ATS && !disable_ats_check) {
> +		enables |= CR0_ATSCHK;
> +		ret = arm_smmu_write_reg_sync(smmu, enables, ARM_SMMU_CR0,
> +					      ARM_SMMU_CR0ACK);
> +		if (ret) {
> +			dev_err(smmu->dev, "failed to enable ATS check\n");
> +			return ret;
> +		}
> +	}
> +
>  	ret = arm_smmu_setup_irqs(smmu);
>  	if (ret) {
>  		dev_err(smmu->dev, "failed to setup irqs\n");

--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Jonathan Cameron March 8, 2018, 4:24 p.m. UTC | #42
On Mon, 12 Feb 2018 18:33:50 +0000
Jean-Philippe Brucker <jean-philippe.brucker@arm.com> wrote:

> For PCI devices that support it, enable the PRI capability and handle
> PRI Page Requests with the generic fault handler.
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
A couple of nitpicks.

> ---
>  drivers/iommu/arm-smmu-v3.c | 174 ++++++++++++++++++++++++++++++--------------
>  1 file changed, 119 insertions(+), 55 deletions(-)
> 
> diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
> index 8d09615fab35..ace2f995b0c0 100644
> --- a/drivers/iommu/arm-smmu-v3.c
> +++ b/drivers/iommu/arm-smmu-v3.c
> @@ -271,6 +271,7 @@
>  #define STRTAB_STE_1_S1COR_SHIFT	4
>  #define STRTAB_STE_1_S1CSH_SHIFT	6
>  
> +#define STRTAB_STE_1_PPAR		(1UL << 18)
>  #define STRTAB_STE_1_S1STALLD		(1UL << 27)
>  
>  #define STRTAB_STE_1_EATS_ABT		0UL
> @@ -346,9 +347,9 @@
>  #define CMDQ_PRI_1_GRPID_SHIFT		0
>  #define CMDQ_PRI_1_GRPID_MASK		0x1ffUL
>  #define CMDQ_PRI_1_RESP_SHIFT		12
> -#define CMDQ_PRI_1_RESP_DENY		(0UL << CMDQ_PRI_1_RESP_SHIFT)
> -#define CMDQ_PRI_1_RESP_FAIL		(1UL << CMDQ_PRI_1_RESP_SHIFT)
> -#define CMDQ_PRI_1_RESP_SUCC		(2UL << CMDQ_PRI_1_RESP_SHIFT)
> +#define CMDQ_PRI_1_RESP_FAILURE		(0UL << CMDQ_PRI_1_RESP_SHIFT)
> +#define CMDQ_PRI_1_RESP_INVALID		(1UL << CMDQ_PRI_1_RESP_SHIFT)
> +#define CMDQ_PRI_1_RESP_SUCCESS		(2UL << CMDQ_PRI_1_RESP_SHIFT)
Mixing fixing up this naming with the rest of the patch does make things a
little harder to read than they would have been if done as separate patches.
Worth splitting?

>  
>  #define CMDQ_RESUME_0_SID_SHIFT		32
>  #define CMDQ_RESUME_0_SID_MASK		0xffffffffUL
> @@ -442,12 +443,6 @@ module_param_named(disable_ats_check, disable_ats_check, bool, S_IRUGO);
>  MODULE_PARM_DESC(disable_ats_check,
>  	"By default, the SMMU checks whether each incoming transaction marked as translated is allowed by the stream configuration. This option disables the check.");
>  
> -enum pri_resp {
> -	PRI_RESP_DENY,
> -	PRI_RESP_FAIL,
> -	PRI_RESP_SUCC,
> -};
> -
>  enum arm_smmu_msi_index {
>  	EVTQ_MSI_INDEX,
>  	GERROR_MSI_INDEX,
> @@ -530,7 +525,7 @@ struct arm_smmu_cmdq_ent {
>  			u32			sid;
>  			u32			ssid;
>  			u16			grpid;
> -			enum pri_resp		resp;
> +			enum page_response_code	resp;
>  		} pri;
>  
>  		#define CMDQ_OP_RESUME		0x44
> @@ -615,6 +610,7 @@ struct arm_smmu_strtab_ent {
>  	struct arm_smmu_s2_cfg		*s2_cfg;
>  
>  	bool				can_stall;
> +	bool				prg_resp_needs_ssid;
>  };
>  
>  struct arm_smmu_strtab_cfg {
> @@ -969,14 +965,14 @@ static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
>  		cmd[0] |= (u64)ent->pri.sid << CMDQ_PRI_0_SID_SHIFT;
>  		cmd[1] |= ent->pri.grpid << CMDQ_PRI_1_GRPID_SHIFT;
>  		switch (ent->pri.resp) {
> -		case PRI_RESP_DENY:
> -			cmd[1] |= CMDQ_PRI_1_RESP_DENY;
> +		case IOMMU_PAGE_RESP_FAILURE:
> +			cmd[1] |= CMDQ_PRI_1_RESP_FAILURE;
>  			break;
> -		case PRI_RESP_FAIL:
> -			cmd[1] |= CMDQ_PRI_1_RESP_FAIL;
> +		case IOMMU_PAGE_RESP_INVALID:
> +			cmd[1] |= CMDQ_PRI_1_RESP_INVALID;
>  			break;
> -		case PRI_RESP_SUCC:
> -			cmd[1] |= CMDQ_PRI_1_RESP_SUCC;
> +		case IOMMU_PAGE_RESP_SUCCESS:
> +			cmd[1] |= CMDQ_PRI_1_RESP_SUCCESS;
>  			break;
>  		default:
>  			return -EINVAL;
> @@ -1180,9 +1176,16 @@ static int arm_smmu_page_response(struct iommu_domain *domain,
>  		cmd.resume.sid		= sid;
>  		cmd.resume.stag		= resp->page_req_group_id;
>  		cmd.resume.resp		= resp->resp_code;
> +	} else if (master->can_fault) {
> +		cmd.opcode		= CMDQ_OP_PRI_RESP;
> +		cmd.substream_valid	= resp->pasid_present &&
> +					  master->ste.prg_resp_needs_ssid;
> +		cmd.pri.sid		= sid;
> +		cmd.pri.ssid		= resp->pasid;
> +		cmd.pri.grpid		= resp->page_req_group_id;
> +		cmd.pri.resp		= resp->resp_code;
>  	} else {
> -		/* TODO: put PRI response here */
> -		return -EINVAL;
> +		return -ENODEV;
>  	}
>  
>  	arm_smmu_cmdq_issue_cmd(master->smmu, &cmd);
> @@ -1309,6 +1312,9 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_device *smmu, u32 sid,
>  			  STRTAB_STE_1_STRW_EL2 : STRTAB_STE_1_STRW_NSEL1) <<
>  			 STRTAB_STE_1_STRW_SHIFT);
>  
> +		if (ste->prg_resp_needs_ssid)
> +			dst[1] |= STRTAB_STE_1_PPAR;
> +
>  		if (smmu->features & ARM_SMMU_FEAT_STALLS &&
>  		   !(smmu->features & ARM_SMMU_FEAT_STALL_FORCE) &&
>  		   !ste->can_stall)
> @@ -1536,40 +1542,32 @@ static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev)
>  
>  static void arm_smmu_handle_ppr(struct arm_smmu_device *smmu, u64 *evt)
>  {
> -	u32 sid, ssid;
> -	u16 grpid;
> -	bool ssv, last;
> -
> -	sid = evt[0] >> PRIQ_0_SID_SHIFT & PRIQ_0_SID_MASK;
> -	ssv = evt[0] & PRIQ_0_SSID_V;
> -	ssid = ssv ? evt[0] >> PRIQ_0_SSID_SHIFT & PRIQ_0_SSID_MASK : 0;
> -	last = evt[0] & PRIQ_0_PRG_LAST;
> -	grpid = evt[1] >> PRIQ_1_PRG_IDX_SHIFT & PRIQ_1_PRG_IDX_MASK;
> -
> -	dev_info(smmu->dev, "unexpected PRI request received:\n");
> -	dev_info(smmu->dev,
> -		 "\tsid 0x%08x.0x%05x: [%u%s] %sprivileged %s%s%s access at iova 0x%016llx\n",
> -		 sid, ssid, grpid, last ? "L" : "",
> -		 evt[0] & PRIQ_0_PERM_PRIV ? "" : "un",
> -		 evt[0] & PRIQ_0_PERM_READ ? "R" : "",
> -		 evt[0] & PRIQ_0_PERM_WRITE ? "W" : "",
> -		 evt[0] & PRIQ_0_PERM_EXEC ? "X" : "",
> -		 evt[1] & PRIQ_1_ADDR_MASK << PRIQ_1_ADDR_SHIFT);
> -
> -	if (last) {
> -		struct arm_smmu_cmdq_ent cmd = {
> -			.opcode			= CMDQ_OP_PRI_RESP,
> -			.substream_valid	= ssv,
> -			.pri			= {
> -				.sid	= sid,
> -				.ssid	= ssid,
> -				.grpid	= grpid,
> -				.resp	= PRI_RESP_DENY,
> -			},
> -		};
> +	u32 sid = evt[0] >> PRIQ_0_SID_SHIFT & PRIQ_0_SID_MASK;
>  
> -		arm_smmu_cmdq_issue_cmd(smmu, &cmd);
> -	}
> +	struct arm_smmu_master_data *master;
> +	struct iommu_fault_event fault = {
> +		.type		= IOMMU_FAULT_PAGE_REQ,
> +		.last_req	= !!(evt[0] & PRIQ_0_PRG_LAST),
> +		.pasid_valid	= !!(evt[0] & PRIQ_0_SSID_V),
> +		.pasid		= evt[0] >> PRIQ_0_SSID_SHIFT & PRIQ_0_SSID_MASK,
> +		.page_req_group_id = evt[1] >> PRIQ_1_PRG_IDX_SHIFT & PRIQ_1_PRG_IDX_MASK,
> +		.addr		= evt[1] & PRIQ_1_ADDR_MASK << PRIQ_1_ADDR_SHIFT,
> +	};
> +
> +	if (evt[0] & PRIQ_0_PERM_READ)
> +		fault.prot |= IOMMU_FAULT_READ;
> +	if (evt[0] & PRIQ_0_PERM_WRITE)
> +		fault.prot |= IOMMU_FAULT_WRITE;
> +	if (evt[0] & PRIQ_0_PERM_EXEC)
> +		fault.prot |= IOMMU_FAULT_EXEC;
> +	if (evt[0] & PRIQ_0_PERM_PRIV)
> +		fault.prot |= IOMMU_FAULT_PRIV;
> +
> +	master = arm_smmu_find_master(smmu, sid);
> +	if (WARN_ON(!master))
> +		return;
> +
> +	iommu_report_device_fault(master->dev, &fault);
>  }
>  
>  static irqreturn_t arm_smmu_priq_thread(int irq, void *dev)
> @@ -1594,6 +1592,11 @@ static irqreturn_t arm_smmu_priq_thread(int irq, void *dev)
>  		}
>  
>  		if (queue_sync_prod(q) == -EOVERFLOW)
> +			/*
> +			 * TODO: flush pending faults, since the SMMU might have
> +			 * auto-responded to the Last request of a pending
> +			 * group
> +			 */
>  			dev_err(smmu->dev, "PRIQ overflow detected -- requests lost\n");
>  	} while (!queue_empty(q));
>  
> @@ -1647,7 +1650,8 @@ static int arm_smmu_flush_queues(struct notifier_block *nb,
>  	if (master) {
>  		if (master->ste.can_stall)
>  			arm_smmu_flush_queue(smmu, &smmu->evtq.q, "evtq");
> -		/* TODO: add support for PRI */
> +		else if (master->can_fault)
> +			arm_smmu_flush_queue(smmu, &smmu->priq.q, "priq");
>  		return 0;
>  	}
>  
> @@ -2533,6 +2537,46 @@ static int arm_smmu_enable_ats(struct arm_smmu_master_data *master)
>  	return 0;
>  }
>  
> +static int arm_smmu_enable_pri(struct arm_smmu_master_data *master)
> +{
> +	int ret, pos;
> +	struct pci_dev *pdev;
> +	/*
> +	 * TODO: find a good inflight PPR number. We should divide the PRI queue
> +	 * by the number of PRI-capable devices, but it's impossible to know
> +	 * about current and future (hotplugged) devices. So we're at risk of
> +	 * dropping PPRs (and leaking pending requests in the FQ).
> +	 */
> +	size_t max_inflight_pprs = 16;
> +	struct arm_smmu_device *smmu = master->smmu;
> +
> +	if (!(smmu->features & ARM_SMMU_FEAT_PRI) || !dev_is_pci(master->dev))
> +		return -ENOSYS;
> +
> +	pdev = to_pci_dev(master->dev);
> +
> +	pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_PRI);
> +	if (!pos)
> +		return -ENOSYS;
> +
> +	ret = pci_reset_pri(pdev);
> +	if (ret)
> +		return ret;
> +
> +	ret = pci_enable_pri(pdev, max_inflight_pprs);
> +	if (ret) {
> +		dev_err(master->dev, "cannot enable PRI: %d\n", ret);
> +		return ret;
> +	}
> +
> +	master->can_fault = true;
> +	master->ste.prg_resp_needs_ssid = pci_prg_resp_requires_prefix(pdev);
> +
> +	dev_dbg(master->dev, "enabled PRI");
> +
> +	return 0;
> +}
> +

The function ordering gets a bit random as you add all the new ones,
Might be better to keep each disable following each enable.

>  static void arm_smmu_disable_ats(struct arm_smmu_master_data *master)
>  {
>  	struct pci_dev *pdev;
> @@ -2548,6 +2592,22 @@ static void arm_smmu_disable_ats(struct arm_smmu_master_data *master)
>  	pci_disable_ats(pdev);
>  }
>  
> +static void arm_smmu_disable_pri(struct arm_smmu_master_data *master)
> +{
> +	struct pci_dev *pdev;
> +
> +	if (!dev_is_pci(master->dev))
> +		return;
> +
> +	pdev = to_pci_dev(master->dev);
> +
> +	if (!pdev->pri_enabled)
> +		return;
> +
> +	pci_disable_pri(pdev);
> +	master->can_fault = false;
> +}
> +
>  static int arm_smmu_insert_master(struct arm_smmu_device *smmu,
>  				  struct arm_smmu_master_data *master)
>  {
> @@ -2668,12 +2728,13 @@ static int arm_smmu_add_device(struct device *dev)
>  		master->ste.can_stall = true;
>  	}
>  
> -	arm_smmu_enable_ats(master);
> +	if (!arm_smmu_enable_ats(master))
> +		arm_smmu_enable_pri(master);
>  
>  	group = iommu_group_get_for_dev(dev);
>  	if (IS_ERR(group)) {
>  		ret = PTR_ERR(group);
> -		goto err_disable_ats;
> +		goto err_disable_pri;
>  	}
>  
>  	iommu_group_put(group);
> @@ -2682,7 +2743,8 @@ static int arm_smmu_add_device(struct device *dev)
>  
>  	return 0;
>  
> -err_disable_ats:
> +err_disable_pri:
> +	arm_smmu_disable_pri(master);
>  	arm_smmu_disable_ats(master);
>  
>  	return ret;
> @@ -2702,6 +2764,8 @@ static void arm_smmu_remove_device(struct device *dev)
>  	if (master && master->ste.assigned)
>  		arm_smmu_detach_dev(dev);
>  	arm_smmu_remove_master(smmu, master);
> +
> +	arm_smmu_disable_pri(master);
>  	arm_smmu_disable_ats(master);
>  
>  	iommu_group_remove_device(dev);

--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Jonathan Cameron March 8, 2018, 5:34 p.m. UTC | #43
On Mon, 12 Feb 2018 18:33:43 +0000
Jean-Philippe Brucker <jean-philippe.brucker@arm.com> wrote:

> When handling faults from the event or PRI queue, we need to find the
> struct device associated to a SID. Add a rb_tree to keep track of SIDs.
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
nipick inline.


> ---
>  drivers/iommu/arm-smmu-v3.c | 105 ++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 105 insertions(+)
> 
> diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
> index c5b3a43becaf..2430b2140f8d 100644
> --- a/drivers/iommu/arm-smmu-v3.c
> +++ b/drivers/iommu/arm-smmu-v3.c
> @@ -615,10 +615,19 @@ struct arm_smmu_device {
>  	/* IOMMU core code handle */
>  	struct iommu_device		iommu;
>  
> +	struct rb_root			streams;
> +	struct mutex			streams_mutex;
> +
>  	/* Notifier for the fault queue */
>  	struct notifier_block		faultq_nb;
>  };
>  
> +struct arm_smmu_stream {
> +	u32				id;
> +	struct arm_smmu_master_data	*master;
> +	struct rb_node			node;
> +};
> +
>  /* SMMU private data for each master */
>  struct arm_smmu_master_data {
>  	struct arm_smmu_device		*smmu;
> @@ -626,6 +635,7 @@ struct arm_smmu_master_data {
>  
>  	struct arm_smmu_domain		*domain;
>  	struct list_head		list; /* domain->devices */
> +	struct arm_smmu_stream		*streams;
>  
>  	struct device			*dev;
>  
> @@ -1250,6 +1260,31 @@ static int arm_smmu_init_l2_strtab(struct arm_smmu_device *smmu, u32 sid)
>  	return 0;
>  }
>  
> +static struct arm_smmu_master_data *
> +arm_smmu_find_master(struct arm_smmu_device *smmu, u32 sid)
> +{
> +	struct rb_node *node;
> +	struct arm_smmu_stream *stream;
> +	struct arm_smmu_master_data *master = NULL;
> +
> +	mutex_lock(&smmu->streams_mutex);
> +	node = smmu->streams.rb_node;
> +	while (node) {
> +		stream = rb_entry(node, struct arm_smmu_stream, node);
> +		if (stream->id < sid) {
> +			node = node->rb_right;
> +		} else if (stream->id > sid) {
> +			node = node->rb_left;
> +		} else {
> +			master = stream->master;
> +			break;
> +		}
> +	}
> +	mutex_unlock(&smmu->streams_mutex);
> +
> +	return master;
> +}
> +
>  /* IRQ and event handlers */
>  static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev)
>  {
> @@ -2146,6 +2181,71 @@ static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
>  	return sid < limit;
>  }
>  
> +static int arm_smmu_insert_master(struct arm_smmu_device *smmu,
> +				  struct arm_smmu_master_data *master)
> +{
> +	int i;
> +	int ret = 0;
> +	struct arm_smmu_stream *new_stream, *cur_stream;
> +	struct rb_node **new_node, *parent_node = NULL;
> +	struct iommu_fwspec *fwspec = master->dev->iommu_fwspec;
> +
> +	master->streams = kcalloc(fwspec->num_ids,
> +				  sizeof(struct arm_smmu_stream), GFP_KERNEL);
> +	if (!master->streams)
> +		return -ENOMEM;
> +
> +	mutex_lock(&smmu->streams_mutex);
> +	for (i = 0; i < fwspec->num_ids && !ret; i++) {
> +		new_stream = &master->streams[i];
> +		new_stream->id = fwspec->ids[i];
> +		new_stream->master = master;
> +
> +		new_node = &(smmu->streams.rb_node);
> +		while (*new_node) {
> +			cur_stream = rb_entry(*new_node, struct arm_smmu_stream,
> +					      node);
> +			parent_node = *new_node;
> +			if (cur_stream->id > new_stream->id) {
> +				new_node = &((*new_node)->rb_left);
> +			} else if (cur_stream->id < new_stream->id) {
> +				new_node = &((*new_node)->rb_right);
> +			} else {
> +				dev_warn(master->dev,
> +					 "stream %u already in tree\n",
> +					 cur_stream->id);
> +				ret = -EINVAL;
> +				break;
> +			}
> +		}
> +
> +		if (!ret) {
> +			rb_link_node(&new_stream->node, parent_node, new_node);
> +			rb_insert_color(&new_stream->node, &smmu->streams);
> +		}
> +	}
> +	mutex_unlock(&smmu->streams_mutex);
> +
> +	return ret;
> +}
> +
> +static void arm_smmu_remove_master(struct arm_smmu_device *smmu,
> +				   struct arm_smmu_master_data *master)
> +{
> +	int i;
> +	struct iommu_fwspec *fwspec = master->dev->iommu_fwspec;
> +
> +	if (!master->streams)
> +		return;
> +
> +	mutex_lock(&smmu->streams_mutex);
> +	for (i = 0; i < fwspec->num_ids; i++)
> +		rb_erase(&master->streams[i].node, &smmu->streams);
> +	mutex_unlock(&smmu->streams_mutex);
> +
> +	kfree(master->streams);
> +}
> +
>  static struct iommu_ops arm_smmu_ops;
>  
>  static int arm_smmu_add_device(struct device *dev)
> @@ -2198,6 +2298,7 @@ static int arm_smmu_add_device(struct device *dev)
>  
>  	group = iommu_group_get_for_dev(dev);
>  	if (!IS_ERR(group)) {
> +		arm_smmu_insert_master(smmu, master);
There are some error cases it would be good to take notice off when
inserting the master.  Admittedly the same is true of iommu_device_link
so I guess you are keeping with the existing code style.

Would also be nice if the later bit of rework to drop these out
of the if statement was done before this patch in the series.


>  		iommu_group_put(group);
>  		iommu_device_link(&smmu->iommu, dev);
>  	}
> @@ -2218,6 +2319,7 @@ static void arm_smmu_remove_device(struct device *dev)
>  	smmu = master->smmu;
>  	if (master && master->ste.assigned)
>  		arm_smmu_detach_dev(dev);
> +	arm_smmu_remove_master(smmu, master);
>  	iommu_group_remove_device(dev);
>  	iommu_device_unlink(&smmu->iommu, dev);
>  	kfree(master);
> @@ -2527,6 +2629,9 @@ static int arm_smmu_init_structures(struct arm_smmu_device *smmu)
>  	int ret;
>  
>  	atomic_set(&smmu->sync_nr, 0);
> +	mutex_init(&smmu->streams_mutex);
> +	smmu->streams = RB_ROOT;
> +
>  	ret = arm_smmu_init_queues(smmu);
>  	if (ret)
>  		return ret;

--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Jonathan Cameron March 8, 2018, 5:44 p.m. UTC | #44
On Mon, 12 Feb 2018 18:33:42 +0000
Jean-Philippe Brucker <jean-philippe.brucker@arm.com> wrote:

> When using PRI or Stall, the PRI or event handler enqueues faults into the
> core fault queue. Register it based on the SMMU features.
> 
> When the core stops using a PASID, it notifies the SMMU to flush all
> instances of this PASID from the PRI queue. Add a way to flush the PRI and
> event queue. PRI and event thread now take a spinlock while processing the
> queue. The flush handler takes this lock to inspect the queue state.
> We avoid livelock, where the SMMU adds fault to the queue faster than we
> can consume them, by incrementing a 'batch' number on every cycle so the
> flush handler only has to wait a complete cycle (two batch increments.)
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
I think you have a potential incorrect free issue... See inline.

Jonathan
> ---
>  drivers/iommu/Kconfig       |   1 +
>  drivers/iommu/arm-smmu-v3.c | 103 +++++++++++++++++++++++++++++++++++++++++++-
>  2 files changed, 103 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
> index d434f7085dc2..d79c68754bb9 100644
> --- a/drivers/iommu/Kconfig
> +++ b/drivers/iommu/Kconfig
> @@ -354,6 +354,7 @@ config ARM_SMMU_V3
>  	depends on ARM64
>  	select IOMMU_API
>  	select IOMMU_SVA
> +	select IOMMU_FAULT
>  	select IOMMU_IO_PGTABLE_LPAE
>  	select ARM_SMMU_V3_CONTEXT
>  	select GENERIC_MSI_IRQ_DOMAIN
> diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
> index 8528704627b5..c5b3a43becaf 100644
> --- a/drivers/iommu/arm-smmu-v3.c
> +++ b/drivers/iommu/arm-smmu-v3.c
> @@ -494,6 +494,10 @@ struct arm_smmu_queue {
>  
>  	u32 __iomem			*prod_reg;
>  	u32 __iomem			*cons_reg;
> +
> +	/* Event and PRI */
> +	u64				batch;
> +	wait_queue_head_t		wq;
>  };
>  
>  struct arm_smmu_cmdq {
> @@ -610,6 +614,9 @@ struct arm_smmu_device {
>  
>  	/* IOMMU core code handle */
>  	struct iommu_device		iommu;
> +
> +	/* Notifier for the fault queue */
> +	struct notifier_block		faultq_nb;
>  };
>  
>  /* SMMU private data for each master */
> @@ -1247,14 +1254,23 @@ static int arm_smmu_init_l2_strtab(struct arm_smmu_device *smmu, u32 sid)
>  static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev)
>  {
>  	int i;
> +	int num_handled = 0;
>  	struct arm_smmu_device *smmu = dev;
>  	struct arm_smmu_queue *q = &smmu->evtq.q;
> +	size_t queue_size = 1 << q->max_n_shift;
>  	u64 evt[EVTQ_ENT_DWORDS];
>  
> +	spin_lock(&q->wq.lock);
>  	do {
>  		while (!queue_remove_raw(q, evt)) {
>  			u8 id = evt[0] >> EVTQ_0_ID_SHIFT & EVTQ_0_ID_MASK;
>  
> +			if (++num_handled == queue_size) {
> +				q->batch++;
> +				wake_up_locked(&q->wq);
> +				num_handled = 0;
> +			}
> +
>  			dev_info(smmu->dev, "event 0x%02x received:\n", id);
>  			for (i = 0; i < ARRAY_SIZE(evt); ++i)
>  				dev_info(smmu->dev, "\t0x%016llx\n",
> @@ -1272,6 +1288,11 @@ static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev)
>  
>  	/* Sync our overflow flag, as we believe we're up to speed */
>  	q->cons = Q_OVF(q, q->prod) | Q_WRP(q, q->cons) | Q_IDX(q, q->cons);
> +
> +	q->batch++;
> +	wake_up_locked(&q->wq);
> +	spin_unlock(&q->wq.lock);
> +
>  	return IRQ_HANDLED;
>  }
>  
> @@ -1315,13 +1336,24 @@ static void arm_smmu_handle_ppr(struct arm_smmu_device *smmu, u64 *evt)
>  
>  static irqreturn_t arm_smmu_priq_thread(int irq, void *dev)
>  {
> +	int num_handled = 0;
>  	struct arm_smmu_device *smmu = dev;
>  	struct arm_smmu_queue *q = &smmu->priq.q;
> +	size_t queue_size = 1 << q->max_n_shift;
>  	u64 evt[PRIQ_ENT_DWORDS];
>  
> +	spin_lock(&q->wq.lock);
>  	do {
> -		while (!queue_remove_raw(q, evt))
> +		while (!queue_remove_raw(q, evt)) {
> +			spin_unlock(&q->wq.lock);
>  			arm_smmu_handle_ppr(smmu, evt);
> +			spin_lock(&q->wq.lock);
> +			if (++num_handled == queue_size) {
> +				q->batch++;
> +				wake_up_locked(&q->wq);
> +				num_handled = 0;
> +			}
> +		}
>  
>  		if (queue_sync_prod(q) == -EOVERFLOW)
>  			dev_err(smmu->dev, "PRIQ overflow detected -- requests lost\n");
> @@ -1329,9 +1361,65 @@ static irqreturn_t arm_smmu_priq_thread(int irq, void *dev)
>  
>  	/* Sync our overflow flag, as we believe we're up to speed */
>  	q->cons = Q_OVF(q, q->prod) | Q_WRP(q, q->cons) | Q_IDX(q, q->cons);
> +
> +	q->batch++;
> +	wake_up_locked(&q->wq);
> +	spin_unlock(&q->wq.lock);
> +
>  	return IRQ_HANDLED;
>  }
>  
> +/*
> + * arm_smmu_flush_queue - wait until all events/PPRs currently in the queue have
> + * been consumed.
> + *
> + * Wait until the queue thread finished a batch, or until the queue is empty.
> + * Note that we don't handle overflows on q->batch. If it occurs, just wait for
> + * the queue to be empty.
> + */
> +static int arm_smmu_flush_queue(struct arm_smmu_device *smmu,
> +				struct arm_smmu_queue *q, const char *name)
> +{
> +	int ret;
> +	u64 batch;
> +
> +	spin_lock(&q->wq.lock);
> +	if (queue_sync_prod(q) == -EOVERFLOW)
> +		dev_err(smmu->dev, "%s overflow detected -- requests lost\n", name);
> +
> +	batch = q->batch;
> +	ret = wait_event_interruptible_locked(q->wq, queue_empty(q) ||
> +					      q->batch >= batch + 2);
> +	spin_unlock(&q->wq.lock);
> +
> +	return ret;
> +}
> +
> +static int arm_smmu_flush_queues(struct notifier_block *nb,
> +				 unsigned long action, void *data)
> +{
> +	struct arm_smmu_device *smmu = container_of(nb, struct arm_smmu_device,
> +						    faultq_nb);
> +	struct device *dev = data;
> +	struct arm_smmu_master_data *master = NULL;
> +
> +	if (dev)
> +		master = dev->iommu_fwspec->iommu_priv;
> +
> +	if (master) {
> +		/* TODO: add support for PRI and Stall */
> +		return 0;
> +	}
> +
> +	/* No target device, flush all queues. */
> +	if (smmu->features & ARM_SMMU_FEAT_STALLS)
> +		arm_smmu_flush_queue(smmu, &smmu->evtq.q, "evtq");
> +	if (smmu->features & ARM_SMMU_FEAT_PRI)
> +		arm_smmu_flush_queue(smmu, &smmu->priq.q, "priq");
> +
> +	return 0;
> +}
> +
>  static int arm_smmu_device_disable(struct arm_smmu_device *smmu);
>  
>  static irqreturn_t arm_smmu_gerror_handler(int irq, void *dev)
> @@ -2288,6 +2376,10 @@ static int arm_smmu_init_one_queue(struct arm_smmu_device *smmu,
>  		     << Q_BASE_LOG2SIZE_SHIFT;
>  
>  	q->prod = q->cons = 0;
> +
> +	init_waitqueue_head(&q->wq);
> +	q->batch = 0;
> +
>  	return 0;
>  }
>  
> @@ -3168,6 +3260,13 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
>  	if (ret)
>  		return ret;
>  
> +	if (smmu->features & (ARM_SMMU_FEAT_STALLS | ARM_SMMU_FEAT_PRI)) {
> +		smmu->faultq_nb.notifier_call = arm_smmu_flush_queues;
> +		ret = iommu_fault_queue_register(&smmu->faultq_nb);
Here you register only if this smmu supports stalls or pri which is fine, but
see the unregister path.

> +		if (ret)
> +			return ret;
> +	}
> +
>  	/* And we're up. Go go go! */
>  	ret = iommu_device_sysfs_add(&smmu->iommu, dev, NULL,
>  				     "smmu3.%pa", &ioaddr);
> @@ -3210,6 +3309,8 @@ static int arm_smmu_device_remove(struct platform_device *pdev)
>  {
>  	struct arm_smmu_device *smmu = platform_get_drvdata(pdev);
>  
> +	iommu_fault_queue_unregister(&smmu->faultq_nb);

Here you unregister from the fault queue unconditionally.  That is mostly
safe but it seems to decrement and potentially destroy the work queue that
is in use by another smmu instance that does support page faulting.

> +
>  	arm_smmu_device_disable(smmu);
>  
>  	return 0;

--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Jonathan Cameron March 9, 2018, 11:44 a.m. UTC | #45
On Mon, 12 Feb 2018 18:33:32 +0000
Jean-Philippe Brucker <jean-philippe.brucker@arm.com> wrote:

> In order to add support for substream ID, move the context descriptor code
> into a separate library. At the moment it only manages context descriptor
> 0, which is used for non-PASID translations.
> 
> One important behavior change is the ASID allocator, which is now global
> instead of per-SMMU. If we end up needing per-SMMU ASIDs after all, it
> would be relatively simple to move back to per-device allocator instead
> of a global one. Sharing ASIDs will require an IDR, so implement the
> ASID allocator with an IDA instead of porting the bitmap, to ease the
> transition.
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Hi Jean-Philippe,

This would have been easier to review if split into a 'move' and additional
patches actually making the changes described.

Superficially it looks like there may be more going on in here than the
above description suggests.  I'm unsure why we are gaining 
the CFGI_CD_ALL and similar in this patch as there is just to much going on.

Thanks,

Jonathan
> ---
>  MAINTAINERS                         |   2 +-
>  drivers/iommu/Kconfig               |  11 ++
>  drivers/iommu/Makefile              |   1 +
>  drivers/iommu/arm-smmu-v3-context.c | 289 ++++++++++++++++++++++++++++++++++++
>  drivers/iommu/arm-smmu-v3.c         | 265 +++++++++++++++------------------
>  drivers/iommu/iommu-pasid.c         |   1 +
>  drivers/iommu/iommu-pasid.h         |  27 ++++
>  7 files changed, 451 insertions(+), 145 deletions(-)
>  create mode 100644 drivers/iommu/arm-smmu-v3-context.c
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 9cb8ced8322a..93507bfe03a6 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -1104,7 +1104,7 @@ R:	Robin Murphy <robin.murphy@arm.com>
>  L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
>  S:	Maintained
>  F:	drivers/iommu/arm-smmu.c
> -F:	drivers/iommu/arm-smmu-v3.c
> +F:	drivers/iommu/arm-smmu-v3*
>  F:	drivers/iommu/io-pgtable-arm.c
>  F:	drivers/iommu/io-pgtable-arm.h
>  F:	drivers/iommu/io-pgtable-arm-v7s.c
> diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
> index 8add90ba9b75..4b272925ee78 100644
> --- a/drivers/iommu/Kconfig
> +++ b/drivers/iommu/Kconfig
> @@ -66,6 +66,16 @@ menu "Generic PASID table support"
>  config IOMMU_PASID_TABLE
>  	bool
>  
> +config ARM_SMMU_V3_CONTEXT
> +	bool "ARM SMMU v3 Context Descriptor tables"
> +	select IOMMU_PASID_TABLE
> +	depends on ARM64
> +	help
> +	Enable support for ARM SMMU v3 Context Descriptor tables, used for DMA
> +	and PASID support.
> +
> +	If unsure, say N here.
> +
>  endmenu
>  
>  config IOMMU_IOVA
> @@ -344,6 +354,7 @@ config ARM_SMMU_V3
>  	depends on ARM64
>  	select IOMMU_API
>  	select IOMMU_IO_PGTABLE_LPAE
> +	select ARM_SMMU_V3_CONTEXT
>  	select GENERIC_MSI_IRQ_DOMAIN
>  	help
>  	  Support for implementations of the ARM System MMU architecture
> diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
> index 338e59c93131..22758960ed02 100644
> --- a/drivers/iommu/Makefile
> +++ b/drivers/iommu/Makefile
> @@ -9,6 +9,7 @@ obj-$(CONFIG_IOMMU_IO_PGTABLE) += io-pgtable.o
>  obj-$(CONFIG_IOMMU_IO_PGTABLE_ARMV7S) += io-pgtable-arm-v7s.o
>  obj-$(CONFIG_IOMMU_IO_PGTABLE_LPAE) += io-pgtable-arm.o
>  obj-$(CONFIG_IOMMU_PASID_TABLE) += iommu-pasid.o
> +obj-$(CONFIG_ARM_SMMU_V3_CONTEXT) += arm-smmu-v3-context.o
>  obj-$(CONFIG_IOMMU_IOVA) += iova.o
>  obj-$(CONFIG_OF_IOMMU)	+= of_iommu.o
>  obj-$(CONFIG_MSM_IOMMU) += msm_iommu.o
> diff --git a/drivers/iommu/arm-smmu-v3-context.c b/drivers/iommu/arm-smmu-v3-context.c
> new file mode 100644
> index 000000000000..e910cb356f45
> --- /dev/null
> +++ b/drivers/iommu/arm-smmu-v3-context.c
> @@ -0,0 +1,289 @@
> +/*
> + * Context descriptor table driver for SMMUv3
> + *
> + * Copyright (C) 2018 ARM Ltd.
> + *
> + * SPDX-License-Identifier: GPL-2.0
> + */
> +
> +#include <linux/device.h>
> +#include <linux/dma-mapping.h>
> +#include <linux/idr.h>
> +#include <linux/kernel.h>
> +#include <linux/slab.h>
> +
> +#include "iommu-pasid.h"
> +
> +#define CTXDESC_CD_DWORDS		8
> +#define CTXDESC_CD_0_TCR_T0SZ_SHIFT	0
> +#define ARM64_TCR_T0SZ_SHIFT		0
> +#define ARM64_TCR_T0SZ_MASK		0x1fUL
> +#define CTXDESC_CD_0_TCR_TG0_SHIFT	6
> +#define ARM64_TCR_TG0_SHIFT		14
> +#define ARM64_TCR_TG0_MASK		0x3UL
> +#define CTXDESC_CD_0_TCR_IRGN0_SHIFT	8
> +#define ARM64_TCR_IRGN0_SHIFT		8
> +#define ARM64_TCR_IRGN0_MASK		0x3UL
> +#define CTXDESC_CD_0_TCR_ORGN0_SHIFT	10
> +#define ARM64_TCR_ORGN0_SHIFT		10
> +#define ARM64_TCR_ORGN0_MASK		0x3UL
> +#define CTXDESC_CD_0_TCR_SH0_SHIFT	12
> +#define ARM64_TCR_SH0_SHIFT		12
> +#define ARM64_TCR_SH0_MASK		0x3UL
> +#define CTXDESC_CD_0_TCR_EPD0_SHIFT	14
> +#define ARM64_TCR_EPD0_SHIFT		7
> +#define ARM64_TCR_EPD0_MASK		0x1UL
> +#define CTXDESC_CD_0_TCR_EPD1_SHIFT	30
> +#define ARM64_TCR_EPD1_SHIFT		23
> +#define ARM64_TCR_EPD1_MASK		0x1UL
> +
> +#define CTXDESC_CD_0_ENDI		(1UL << 15)
> +#define CTXDESC_CD_0_V			(1UL << 31)
> +
> +#define CTXDESC_CD_0_TCR_IPS_SHIFT	32
> +#define ARM64_TCR_IPS_SHIFT		32
> +#define ARM64_TCR_IPS_MASK		0x7UL
> +#define CTXDESC_CD_0_TCR_TBI0_SHIFT	38
> +#define ARM64_TCR_TBI0_SHIFT		37
> +#define ARM64_TCR_TBI0_MASK		0x1UL
> +
> +#define CTXDESC_CD_0_AA64		(1UL << 41)
> +#define CTXDESC_CD_0_S			(1UL << 44)
> +#define CTXDESC_CD_0_R			(1UL << 45)
> +#define CTXDESC_CD_0_A			(1UL << 46)
> +#define CTXDESC_CD_0_ASET_SHIFT		47
> +#define CTXDESC_CD_0_ASET_SHARED	(0UL << CTXDESC_CD_0_ASET_SHIFT)
> +#define CTXDESC_CD_0_ASET_PRIVATE	(1UL << CTXDESC_CD_0_ASET_SHIFT)
> +#define CTXDESC_CD_0_ASID_SHIFT		48
> +#define CTXDESC_CD_0_ASID_MASK		0xffffUL
> +
> +#define CTXDESC_CD_1_TTB0_SHIFT		4
> +#define CTXDESC_CD_1_TTB0_MASK		0xfffffffffffUL
> +
> +#define CTXDESC_CD_3_MAIR_SHIFT		0
> +
> +/* Convert between AArch64 (CPU) TCR format and SMMU CD format */
> +#define ARM_SMMU_TCR2CD(tcr, fld)					\
> +	(((tcr) >> ARM64_TCR_##fld##_SHIFT & ARM64_TCR_##fld##_MASK)	\
> +	 << CTXDESC_CD_0_TCR_##fld##_SHIFT)
> +
> +
> +struct arm_smmu_cd {
> +	struct iommu_pasid_entry	entry;
> +
> +	u64				ttbr;
> +	u64				tcr;
> +	u64				mair;
> +};
> +
> +#define pasid_entry_to_cd(entry) \
> +	container_of((entry), struct arm_smmu_cd, entry)
> +
> +struct arm_smmu_cd_tables {
> +	struct iommu_pasid_table	pasid;
> +
> +	void				*ptr;
> +	dma_addr_t			ptr_dma;
> +};
> +
> +#define pasid_to_cd_tables(pasid_table) \
> +	container_of((pasid_table), struct arm_smmu_cd_tables, pasid)
> +
> +#define pasid_ops_to_tables(ops) \
> +	pasid_to_cd_tables(iommu_pasid_table_ops_to_table(ops))
> +
> +static DEFINE_IDA(asid_ida);
> +
> +static u64 arm_smmu_cpu_tcr_to_cd(u64 tcr)
> +{
> +	u64 val = 0;
> +
> +	/* Repack the TCR. Just care about TTBR0 for now */
> +	val |= ARM_SMMU_TCR2CD(tcr, T0SZ);
> +	val |= ARM_SMMU_TCR2CD(tcr, TG0);
> +	val |= ARM_SMMU_TCR2CD(tcr, IRGN0);
> +	val |= ARM_SMMU_TCR2CD(tcr, ORGN0);
> +	val |= ARM_SMMU_TCR2CD(tcr, SH0);
> +	val |= ARM_SMMU_TCR2CD(tcr, EPD0);
> +	val |= ARM_SMMU_TCR2CD(tcr, EPD1);
> +	val |= ARM_SMMU_TCR2CD(tcr, IPS);
> +	val |= ARM_SMMU_TCR2CD(tcr, TBI0);
> +
> +	return val;
> +}
> +
> +static int arm_smmu_write_ctx_desc(struct arm_smmu_cd_tables *tbl, int ssid,
> +				    struct arm_smmu_cd *cd)
> +{
> +	u64 val;
> +	__u64 *cdptr = tbl->ptr;
> +	struct arm_smmu_context_cfg *cfg = &tbl->pasid.cfg.arm_smmu;
> +
> +	if (!cd || WARN_ON(ssid))
> +		return -EINVAL;
> +
> +	/*
> +	 * We don't need to issue any invalidation here, as we'll invalidate
> +	 * the STE when installing the new entry anyway.
> +	 */
> +	val = arm_smmu_cpu_tcr_to_cd(cd->tcr) |
> +#ifdef __BIG_ENDIAN
> +	      CTXDESC_CD_0_ENDI |
> +#endif
> +	      CTXDESC_CD_0_R | CTXDESC_CD_0_A | CTXDESC_CD_0_ASET_PRIVATE |
> +	      CTXDESC_CD_0_AA64 | cd->entry.tag << CTXDESC_CD_0_ASID_SHIFT |
> +	      CTXDESC_CD_0_V;
> +
> +	if (cfg->stall)
> +		val |= CTXDESC_CD_0_S;
> +
> +	cdptr[0] = cpu_to_le64(val);
> +
> +	val = cd->ttbr & CTXDESC_CD_1_TTB0_MASK << CTXDESC_CD_1_TTB0_SHIFT;
> +	cdptr[1] = cpu_to_le64(val);
> +
> +	cdptr[3] = cpu_to_le64(cd->mair << CTXDESC_CD_3_MAIR_SHIFT);
> +
> +	return 0;
> +}
> +
> +static struct iommu_pasid_entry *
> +arm_smmu_alloc_shared_cd(struct iommu_pasid_table_ops *ops, struct mm_struct *mm)
> +{
> +	return ERR_PTR(-ENODEV);
> +}
> +
> +static struct iommu_pasid_entry *
> +arm_smmu_alloc_priv_cd(struct iommu_pasid_table_ops *ops,
> +		       enum io_pgtable_fmt fmt,
> +		       struct io_pgtable_cfg *cfg)
> +{
> +	int ret;
> +	int asid;
> +	struct arm_smmu_cd *cd;
> +	struct arm_smmu_cd_tables *tbl = pasid_ops_to_tables(ops);
> +	struct arm_smmu_context_cfg *ctx_cfg = &tbl->pasid.cfg.arm_smmu;
> +
> +	cd = kzalloc(sizeof(*cd), GFP_KERNEL);
> +	if (!cd)
> +		return ERR_PTR(-ENOMEM);
> +
> +	asid = ida_simple_get(&asid_ida, 0, 1 << ctx_cfg->asid_bits,
> +			      GFP_KERNEL);
> +	if (asid < 0) {
> +		kfree(cd);
> +		return ERR_PTR(asid);
> +	}
> +
> +	cd->entry.tag = asid;
> +
> +	switch (fmt) {
> +	case ARM_64_LPAE_S1:
> +		cd->ttbr	= cfg->arm_lpae_s1_cfg.ttbr[0];
> +		cd->tcr		= cfg->arm_lpae_s1_cfg.tcr;
> +		cd->mair	= cfg->arm_lpae_s1_cfg.mair[0];
> +		break;
> +	default:
> +		pr_err("Unsupported pgtable format 0x%x\n", fmt);
> +		ret = -EINVAL;
> +		goto err_free_asid;
> +	}
> +
> +	return &cd->entry;
> +
> +err_free_asid:
> +	ida_simple_remove(&asid_ida, asid);
> +
> +	kfree(cd);
> +
> +	return ERR_PTR(ret);
> +}
> +
> +static void arm_smmu_free_cd(struct iommu_pasid_table_ops *ops,
> +			     struct iommu_pasid_entry *entry)
> +{
> +	struct arm_smmu_cd *cd = pasid_entry_to_cd(entry);
> +
> +	ida_simple_remove(&asid_ida, (u16)entry->tag);
> +	kfree(cd);
> +}
> +
> +static int arm_smmu_set_cd(struct iommu_pasid_table_ops *ops, int pasid,
> +			   struct iommu_pasid_entry *entry)
> +{
> +	struct arm_smmu_cd_tables *tbl = pasid_ops_to_tables(ops);
> +	struct arm_smmu_cd *cd = pasid_entry_to_cd(entry);
> +
> +	if (WARN_ON(pasid > (1 << tbl->pasid.cfg.order)))
> +		return -EINVAL;
> +
> +	return arm_smmu_write_ctx_desc(tbl, pasid, cd);
> +}
> +
> +static void arm_smmu_clear_cd(struct iommu_pasid_table_ops *ops, int pasid,
> +			      struct iommu_pasid_entry *entry)
> +{
> +	struct arm_smmu_cd_tables *tbl = pasid_ops_to_tables(ops);
> +
> +	if (WARN_ON(pasid > (1 << tbl->pasid.cfg.order)))
> +		return;
> +
> +	arm_smmu_write_ctx_desc(tbl, pasid, NULL);
> +}
> +
> +static struct iommu_pasid_table *
> +arm_smmu_alloc_cd_tables(struct iommu_pasid_table_cfg *cfg, void *cookie)
> +{
> +	struct arm_smmu_cd_tables *tbl;
> +	struct device *dev = cfg->iommu_dev;
> +
> +	if (cfg->order) {
> +		/* TODO: support SSID */
> +		return NULL;
> +	}
> +
> +	tbl = devm_kzalloc(dev, sizeof(*tbl), GFP_KERNEL);
> +	if (!tbl)
> +		return NULL;
> +
> +	tbl->ptr = dmam_alloc_coherent(dev, CTXDESC_CD_DWORDS << 3,
> +				       &tbl->ptr_dma, GFP_KERNEL | __GFP_ZERO);
> +	if (!tbl->ptr) {
> +		dev_warn(dev, "failed to allocate context descriptor\n");
> +		goto err_free_tbl;
> +	}
> +
> +	tbl->pasid.ops = (struct iommu_pasid_table_ops) {
> +		.alloc_priv_entry	= arm_smmu_alloc_priv_cd,
> +		.alloc_shared_entry	= arm_smmu_alloc_shared_cd,
> +		.free_entry		= arm_smmu_free_cd,
> +		.set_entry		= arm_smmu_set_cd,
> +		.clear_entry		= arm_smmu_clear_cd,
> +	};
> +
> +	cfg->base		= tbl->ptr_dma;
> +	cfg->arm_smmu.s1fmt	= ARM_SMMU_S1FMT_LINEAR;
> +
> +	return &tbl->pasid;
> +
> +err_free_tbl:
> +	devm_kfree(dev, tbl);
> +
> +	return NULL;
> +}
> +
> +static void arm_smmu_free_cd_tables(struct iommu_pasid_table *pasid_table)
> +{
> +	struct iommu_pasid_table_cfg *cfg = &pasid_table->cfg;
> +	struct device *dev = cfg->iommu_dev;
> +	struct arm_smmu_cd_tables *tbl = pasid_to_cd_tables(pasid_table);
> +
> +	dmam_free_coherent(dev, CTXDESC_CD_DWORDS << 3,
> +			   tbl->ptr, tbl->ptr_dma);
> +	devm_kfree(dev, tbl);
> +}
> +
> +struct iommu_pasid_init_fns arm_smmu_v3_pasid_init_fns = {
> +	.alloc	= arm_smmu_alloc_cd_tables,
> +	.free	= arm_smmu_free_cd_tables,
> +};
> diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
> index fb2507ffcdaf..b6d8c90fafb3 100644
> --- a/drivers/iommu/arm-smmu-v3.c
> +++ b/drivers/iommu/arm-smmu-v3.c
> @@ -40,6 +40,7 @@
>  #include <linux/amba/bus.h>
>  
>  #include "io-pgtable.h"
> +#include "iommu-pasid.h"
>  
>  /* MMIO registers */
>  #define ARM_SMMU_IDR0			0x0
> @@ -281,60 +282,6 @@
>  #define STRTAB_STE_3_S2TTB_SHIFT	4
>  #define STRTAB_STE_3_S2TTB_MASK		0xfffffffffffUL
>  
> -/* Context descriptor (stage-1 only) */
> -#define CTXDESC_CD_DWORDS		8
> -#define CTXDESC_CD_0_TCR_T0SZ_SHIFT	0
> -#define ARM64_TCR_T0SZ_SHIFT		0
> -#define ARM64_TCR_T0SZ_MASK		0x1fUL
> -#define CTXDESC_CD_0_TCR_TG0_SHIFT	6
> -#define ARM64_TCR_TG0_SHIFT		14
> -#define ARM64_TCR_TG0_MASK		0x3UL
> -#define CTXDESC_CD_0_TCR_IRGN0_SHIFT	8
> -#define ARM64_TCR_IRGN0_SHIFT		8
> -#define ARM64_TCR_IRGN0_MASK		0x3UL
> -#define CTXDESC_CD_0_TCR_ORGN0_SHIFT	10
> -#define ARM64_TCR_ORGN0_SHIFT		10
> -#define ARM64_TCR_ORGN0_MASK		0x3UL
> -#define CTXDESC_CD_0_TCR_SH0_SHIFT	12
> -#define ARM64_TCR_SH0_SHIFT		12
> -#define ARM64_TCR_SH0_MASK		0x3UL
> -#define CTXDESC_CD_0_TCR_EPD0_SHIFT	14
> -#define ARM64_TCR_EPD0_SHIFT		7
> -#define ARM64_TCR_EPD0_MASK		0x1UL
> -#define CTXDESC_CD_0_TCR_EPD1_SHIFT	30
> -#define ARM64_TCR_EPD1_SHIFT		23
> -#define ARM64_TCR_EPD1_MASK		0x1UL
> -
> -#define CTXDESC_CD_0_ENDI		(1UL << 15)
> -#define CTXDESC_CD_0_V			(1UL << 31)
> -
> -#define CTXDESC_CD_0_TCR_IPS_SHIFT	32
> -#define ARM64_TCR_IPS_SHIFT		32
> -#define ARM64_TCR_IPS_MASK		0x7UL
> -#define CTXDESC_CD_0_TCR_TBI0_SHIFT	38
> -#define ARM64_TCR_TBI0_SHIFT		37
> -#define ARM64_TCR_TBI0_MASK		0x1UL
> -
> -#define CTXDESC_CD_0_AA64		(1UL << 41)
> -#define CTXDESC_CD_0_S			(1UL << 44)
> -#define CTXDESC_CD_0_R			(1UL << 45)
> -#define CTXDESC_CD_0_A			(1UL << 46)
> -#define CTXDESC_CD_0_ASET_SHIFT		47
> -#define CTXDESC_CD_0_ASET_SHARED	(0UL << CTXDESC_CD_0_ASET_SHIFT)
> -#define CTXDESC_CD_0_ASET_PRIVATE	(1UL << CTXDESC_CD_0_ASET_SHIFT)
> -#define CTXDESC_CD_0_ASID_SHIFT		48
> -#define CTXDESC_CD_0_ASID_MASK		0xffffUL
> -
> -#define CTXDESC_CD_1_TTB0_SHIFT		4
> -#define CTXDESC_CD_1_TTB0_MASK		0xfffffffffffUL
> -
> -#define CTXDESC_CD_3_MAIR_SHIFT		0
> -
> -/* Convert between AArch64 (CPU) TCR format and SMMU CD format */
> -#define ARM_SMMU_TCR2CD(tcr, fld)					\
> -	(((tcr) >> ARM64_TCR_##fld##_SHIFT & ARM64_TCR_##fld##_MASK)	\
> -	 << CTXDESC_CD_0_TCR_##fld##_SHIFT)
> -
>  /* Command queue */
>  #define CMDQ_ENT_DWORDS			2
>  #define CMDQ_MAX_SZ_SHIFT		8
> @@ -353,6 +300,8 @@
>  #define CMDQ_PREFETCH_1_SIZE_SHIFT	0
>  #define CMDQ_PREFETCH_1_ADDR_MASK	~0xfffUL
>  
> +#define CMDQ_CFGI_0_SSID_SHIFT		12
> +#define CMDQ_CFGI_0_SSID_MASK		0xfffffUL
>  #define CMDQ_CFGI_0_SID_SHIFT		32
>  #define CMDQ_CFGI_0_SID_MASK		0xffffffffUL
>  #define CMDQ_CFGI_1_LEAF		(1UL << 0)
> @@ -476,8 +425,11 @@ struct arm_smmu_cmdq_ent {
>  
>  		#define CMDQ_OP_CFGI_STE	0x3
>  		#define CMDQ_OP_CFGI_ALL	0x4
> +		#define CMDQ_OP_CFGI_CD		0x5
> +		#define CMDQ_OP_CFGI_CD_ALL	0x6
>  		struct {
>  			u32			sid;
> +			u32			ssid;
>  			union {
>  				bool		leaf;
>  				u8		span;
> @@ -552,15 +504,9 @@ struct arm_smmu_strtab_l1_desc {
>  };
>  
>  struct arm_smmu_s1_cfg {
> -	__le64				*cdptr;
> -	dma_addr_t			cdptr_dma;
> -
> -	struct arm_smmu_ctx_desc {
> -		u16	asid;
> -		u64	ttbr;
> -		u64	tcr;
> -		u64	mair;
> -	}				cd;
> +	struct iommu_pasid_table_cfg	tables;
> +	struct iommu_pasid_table_ops	*ops;
> +	struct iommu_pasid_entry	*cd0; /* Default context */
>  };
>  
>  struct arm_smmu_s2_cfg {
> @@ -629,9 +575,7 @@ struct arm_smmu_device {
>  	unsigned long			oas; /* PA */
>  	unsigned long			pgsize_bitmap;
>  
> -#define ARM_SMMU_MAX_ASIDS		(1 << 16)
>  	unsigned int			asid_bits;
> -	DECLARE_BITMAP(asid_map, ARM_SMMU_MAX_ASIDS);
>  
>  #define ARM_SMMU_MAX_VMIDS		(1 << 16)
>  	unsigned int			vmid_bits;
> @@ -855,10 +799,16 @@ static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
>  		cmd[1] |= ent->prefetch.size << CMDQ_PREFETCH_1_SIZE_SHIFT;
>  		cmd[1] |= ent->prefetch.addr & CMDQ_PREFETCH_1_ADDR_MASK;
>  		break;
> +	case CMDQ_OP_CFGI_CD:
> +		cmd[0] |= ent->cfgi.ssid << CMDQ_CFGI_0_SSID_SHIFT;
> +		/* Fallthrough */
>  	case CMDQ_OP_CFGI_STE:
>  		cmd[0] |= (u64)ent->cfgi.sid << CMDQ_CFGI_0_SID_SHIFT;
>  		cmd[1] |= ent->cfgi.leaf ? CMDQ_CFGI_1_LEAF : 0;
>  		break;
> +	case CMDQ_OP_CFGI_CD_ALL:
> +		cmd[0] |= (u64)ent->cfgi.sid << CMDQ_CFGI_0_SID_SHIFT;
> +		break;
>  	case CMDQ_OP_CFGI_ALL:
>  		/* Cover the entire SID range */
>  		cmd[1] |= CMDQ_CFGI_1_RANGE_MASK << CMDQ_CFGI_1_RANGE_SHIFT;
> @@ -1059,54 +1009,6 @@ static void arm_smmu_cmdq_issue_sync(struct arm_smmu_device *smmu)
>  		dev_err_ratelimited(smmu->dev, "CMD_SYNC timeout\n");
>  }
>  
> -/* Context descriptor manipulation functions */
> -static u64 arm_smmu_cpu_tcr_to_cd(u64 tcr)
> -{
> -	u64 val = 0;
> -
> -	/* Repack the TCR. Just care about TTBR0 for now */
> -	val |= ARM_SMMU_TCR2CD(tcr, T0SZ);
> -	val |= ARM_SMMU_TCR2CD(tcr, TG0);
> -	val |= ARM_SMMU_TCR2CD(tcr, IRGN0);
> -	val |= ARM_SMMU_TCR2CD(tcr, ORGN0);
> -	val |= ARM_SMMU_TCR2CD(tcr, SH0);
> -	val |= ARM_SMMU_TCR2CD(tcr, EPD0);
> -	val |= ARM_SMMU_TCR2CD(tcr, EPD1);
> -	val |= ARM_SMMU_TCR2CD(tcr, IPS);
> -	val |= ARM_SMMU_TCR2CD(tcr, TBI0);
> -
> -	return val;
> -}
> -
> -static void arm_smmu_write_ctx_desc(struct arm_smmu_device *smmu,
> -				    struct arm_smmu_s1_cfg *cfg)
> -{
> -	u64 val;
> -
> -	/*
> -	 * We don't need to issue any invalidation here, as we'll invalidate
> -	 * the STE when installing the new entry anyway.
> -	 */
> -	val = arm_smmu_cpu_tcr_to_cd(cfg->cd.tcr) |
> -#ifdef __BIG_ENDIAN
> -	      CTXDESC_CD_0_ENDI |
> -#endif
> -	      CTXDESC_CD_0_R | CTXDESC_CD_0_A | CTXDESC_CD_0_ASET_PRIVATE |
> -	      CTXDESC_CD_0_AA64 | (u64)cfg->cd.asid << CTXDESC_CD_0_ASID_SHIFT |
> -	      CTXDESC_CD_0_V;
> -
> -	/* STALL_MODEL==0b10 && CD.S==0 is ILLEGAL */
> -	if (smmu->features & ARM_SMMU_FEAT_STALL_FORCE)
> -		val |= CTXDESC_CD_0_S;
> -
> -	cfg->cdptr[0] = cpu_to_le64(val);
> -
> -	val = cfg->cd.ttbr & CTXDESC_CD_1_TTB0_MASK << CTXDESC_CD_1_TTB0_SHIFT;
> -	cfg->cdptr[1] = cpu_to_le64(val);
> -
> -	cfg->cdptr[3] = cpu_to_le64(cfg->cd.mair << CTXDESC_CD_3_MAIR_SHIFT);
> -}
> -
>  /* Stream table manipulation functions */
>  static void
>  arm_smmu_write_strtab_l1_desc(__le64 *dst, struct arm_smmu_strtab_l1_desc *desc)
> @@ -1222,7 +1124,7 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_device *smmu, u32 sid,
>  		   !(smmu->features & ARM_SMMU_FEAT_STALL_FORCE))
>  			dst[1] |= cpu_to_le64(STRTAB_STE_1_S1STALLD);
>  
> -		val |= (ste->s1_cfg->cdptr_dma & STRTAB_STE_0_S1CTXPTR_MASK
> +		val |= (ste->s1_cfg->tables.base & STRTAB_STE_0_S1CTXPTR_MASK
>  		        << STRTAB_STE_0_S1CTXPTR_SHIFT) |
>  			STRTAB_STE_0_CFG_S1_TRANS;
>  	}
> @@ -1466,8 +1368,10 @@ static void arm_smmu_tlb_inv_context(void *cookie)
>  	struct arm_smmu_cmdq_ent cmd;
>  
>  	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
> +		if (unlikely(!smmu_domain->s1_cfg.cd0))
> +			return;
>  		cmd.opcode	= CMDQ_OP_TLBI_NH_ASID;
> -		cmd.tlbi.asid	= smmu_domain->s1_cfg.cd.asid;
> +		cmd.tlbi.asid	= smmu_domain->s1_cfg.cd0->tag;
>  		cmd.tlbi.vmid	= 0;
>  	} else {
>  		cmd.opcode	= CMDQ_OP_TLBI_S12_VMALL;
> @@ -1491,8 +1395,10 @@ static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
>  	};
>  
>  	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
> +		if (unlikely(!smmu_domain->s1_cfg.cd0))
> +			return;
>  		cmd.opcode	= CMDQ_OP_TLBI_NH_VA;
> -		cmd.tlbi.asid	= smmu_domain->s1_cfg.cd.asid;
> +		cmd.tlbi.asid	= smmu_domain->s1_cfg.cd0->tag;
>  	} else {
>  		cmd.opcode	= CMDQ_OP_TLBI_S2_IPA;
>  		cmd.tlbi.vmid	= smmu_domain->s2_cfg.vmid;
> @@ -1510,6 +1416,71 @@ static const struct iommu_gather_ops arm_smmu_gather_ops = {
>  	.tlb_sync	= arm_smmu_tlb_sync,
>  };
>  
> +/* PASID TABLE API */
> +static void __arm_smmu_sync_cd(struct arm_smmu_domain *smmu_domain,
> +			       struct arm_smmu_cmdq_ent *cmd)
> +{
> +	size_t i;
> +	unsigned long flags;
> +	struct arm_smmu_master_data *master;
> +	struct arm_smmu_device *smmu = smmu_domain->smmu;
> +
> +	spin_lock_irqsave(&smmu_domain->devices_lock, flags);
> +	list_for_each_entry(master, &smmu_domain->devices, list) {
> +		struct iommu_fwspec *fwspec = master->dev->iommu_fwspec;
> +
> +		for (i = 0; i < fwspec->num_ids; i++) {
> +			cmd->cfgi.sid = fwspec->ids[i];
> +			arm_smmu_cmdq_issue_cmd(smmu, cmd);
> +		}
> +	}
> +	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
> +
> +	__arm_smmu_tlb_sync(smmu);
> +}
> +
> +static void arm_smmu_sync_cd(void *cookie, int ssid, bool leaf)
> +{
> +	struct arm_smmu_cmdq_ent cmd = {
> +		.opcode	= CMDQ_OP_CFGI_CD_ALL,
> +		.cfgi	= {
> +			.ssid	= ssid,
> +			.leaf	= leaf,
> +		},
> +	};
> +
> +	__arm_smmu_sync_cd(cookie, &cmd);
> +}
> +
> +static void arm_smmu_sync_cd_all(void *cookie)
> +{
> +	struct arm_smmu_cmdq_ent cmd = {
> +		.opcode	= CMDQ_OP_CFGI_CD_ALL,
> +	};
> +
> +	__arm_smmu_sync_cd(cookie, &cmd);
> +}
> +
> +static void arm_smmu_tlb_inv_ssid(void *cookie, int ssid,
> +				  struct iommu_pasid_entry *entry)
> +{
> +	struct arm_smmu_domain *smmu_domain = cookie;
> +	struct arm_smmu_device *smmu = smmu_domain->smmu;
> +	struct arm_smmu_cmdq_ent cmd = {
> +		.opcode		= CMDQ_OP_TLBI_NH_ASID,
> +		.tlbi.asid	= entry->tag,
> +	};
> +
> +	arm_smmu_cmdq_issue_cmd(smmu, &cmd);
> +	__arm_smmu_tlb_sync(smmu);
> +}
> +
> +static struct iommu_pasid_sync_ops arm_smmu_ctx_sync = {
> +	.cfg_flush	= arm_smmu_sync_cd,
> +	.cfg_flush_all	= arm_smmu_sync_cd_all,
> +	.tlb_flush	= arm_smmu_tlb_inv_ssid,
> +};
> +
>  /* IOMMU API */
>  static bool arm_smmu_capable(enum iommu_cap cap)
>  {
> @@ -1582,15 +1553,11 @@ static void arm_smmu_domain_free(struct iommu_domain *domain)
>  
>  	/* Free the CD and ASID, if we allocated them */
>  	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
> -		struct arm_smmu_s1_cfg *cfg = &smmu_domain->s1_cfg;
> -
> -		if (cfg->cdptr) {
> -			dmam_free_coherent(smmu_domain->smmu->dev,
> -					   CTXDESC_CD_DWORDS << 3,
> -					   cfg->cdptr,
> -					   cfg->cdptr_dma);
> +		struct iommu_pasid_table_ops *ops = smmu_domain->s1_cfg.ops;
>  
> -			arm_smmu_bitmap_free(smmu->asid_map, cfg->cd.asid);
> +		if (ops) {
> +			ops->free_entry(ops, smmu_domain->s1_cfg.cd0);
> +			iommu_free_pasid_ops(ops);
>  		}
>  	} else {
>  		struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
> @@ -1605,31 +1572,42 @@ static int arm_smmu_domain_finalise_s1(struct arm_smmu_domain *smmu_domain,
>  				       struct io_pgtable_cfg *pgtbl_cfg)
>  {
>  	int ret;
> -	int asid;
> -	struct arm_smmu_device *smmu = smmu_domain->smmu;
> +	struct iommu_pasid_entry *entry;
> +	struct iommu_pasid_table_ops *ops;
>  	struct arm_smmu_s1_cfg *cfg = &smmu_domain->s1_cfg;
> +	struct arm_smmu_device *smmu = smmu_domain->smmu;
> +	struct iommu_pasid_table_cfg pasid_cfg = {
> +		.iommu_dev		= smmu->dev,
> +		.sync			= &arm_smmu_ctx_sync,
> +		.arm_smmu = {
> +			.stall		= !!(smmu->features & ARM_SMMU_FEAT_STALL_FORCE),
> +			.asid_bits	= smmu->asid_bits,
> +		},
> +	};
>  
> -	asid = arm_smmu_bitmap_alloc(smmu->asid_map, smmu->asid_bits);
> -	if (asid < 0)
> -		return asid;
> +	ops = iommu_alloc_pasid_ops(PASID_TABLE_ARM_SMMU_V3, &pasid_cfg,
> +				    smmu_domain);
> +	if (!ops)
> +		return -ENOMEM;
>  
> -	cfg->cdptr = dmam_alloc_coherent(smmu->dev, CTXDESC_CD_DWORDS << 3,
> -					 &cfg->cdptr_dma,
> -					 GFP_KERNEL | __GFP_ZERO);
> -	if (!cfg->cdptr) {
> -		dev_warn(smmu->dev, "failed to allocate context descriptor\n");
> -		ret = -ENOMEM;
> -		goto out_free_asid;
> +	/* Create default entry */
> +	entry = ops->alloc_priv_entry(ops, ARM_64_LPAE_S1, pgtbl_cfg);
> +	if (IS_ERR(entry)) {
> +		iommu_free_pasid_ops(ops);
> +		return PTR_ERR(entry);
>  	}
>  
> -	cfg->cd.asid	= (u16)asid;
> -	cfg->cd.ttbr	= pgtbl_cfg->arm_lpae_s1_cfg.ttbr[0];
> -	cfg->cd.tcr	= pgtbl_cfg->arm_lpae_s1_cfg.tcr;
> -	cfg->cd.mair	= pgtbl_cfg->arm_lpae_s1_cfg.mair[0];
> -	return 0;
> +	ret = ops->set_entry(ops, 0, entry);
> +	if (ret) {
> +		ops->free_entry(ops, entry);
> +		iommu_free_pasid_ops(ops);
> +		return ret;
> +	}
> +
> +	cfg->tables	= pasid_cfg;
> +	cfg->ops	= ops;
> +	cfg->cd0	= entry;
>  
> -out_free_asid:
> -	arm_smmu_bitmap_free(smmu->asid_map, asid);
>  	return ret;
>  }
>  
> @@ -1832,7 +1810,6 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
>  	} else if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
>  		ste->s1_cfg = &smmu_domain->s1_cfg;
>  		ste->s2_cfg = NULL;
> -		arm_smmu_write_ctx_desc(smmu, ste->s1_cfg);
>  	} else {
>  		ste->s1_cfg = NULL;
>  		ste->s2_cfg = &smmu_domain->s2_cfg;
> diff --git a/drivers/iommu/iommu-pasid.c b/drivers/iommu/iommu-pasid.c
> index 6b21d369d514..239b91e18543 100644
> --- a/drivers/iommu/iommu-pasid.c
> +++ b/drivers/iommu/iommu-pasid.c
> @@ -13,6 +13,7 @@
>  
>  static const struct iommu_pasid_init_fns *
>  pasid_table_init_fns[PASID_TABLE_NUM_FMTS] = {
> +	[PASID_TABLE_ARM_SMMU_V3] = &arm_smmu_v3_pasid_init_fns,
>  };
>  
>  struct iommu_pasid_table_ops *
> diff --git a/drivers/iommu/iommu-pasid.h b/drivers/iommu/iommu-pasid.h
> index 40a27d35c1e0..77e449a1655b 100644
> --- a/drivers/iommu/iommu-pasid.h
> +++ b/drivers/iommu/iommu-pasid.h
> @@ -15,6 +15,7 @@
>  struct mm_struct;
>  
>  enum iommu_pasid_table_fmt {
> +	PASID_TABLE_ARM_SMMU_V3,
>  	PASID_TABLE_NUM_FMTS,
>  };
>  
> @@ -73,6 +74,25 @@ struct iommu_pasid_sync_ops {
>  			  struct iommu_pasid_entry *entry);
>  };
>  
> +/**
> + * arm_smmu_context_cfg - PASID table configuration for ARM SMMU v3
> + *
> + * SMMU properties:
> + * @stall:	devices attached to the domain are allowed to stall.
> + * @asid_bits:	number of ASID bits supported by the SMMU
> + *
> + * @s1fmt:	PASID table format, chosen by the allocator.
> + */
> +struct arm_smmu_context_cfg {
> +	u8				stall:1;
> +	u8				asid_bits;
> +
> +#define ARM_SMMU_S1FMT_LINEAR		0x0
> +#define ARM_SMMU_S1FMT_4K_L2		0x1
> +#define ARM_SMMU_S1FMT_64K_L2		0x2
> +	u8				s1fmt;
> +};
> +
>  /**
>   * struct iommu_pasid_table_cfg - Configuration data for a set of PASID tables.
>   *
> @@ -88,6 +108,11 @@ struct iommu_pasid_table_cfg {
>  	const struct iommu_pasid_sync_ops *sync;
>  
>  	dma_addr_t			base;
> +
> +	/* Low-level data specific to the IOMMU */
> +	union {
> +		struct arm_smmu_context_cfg arm_smmu;
> +	};
>  };
>  
>  struct iommu_pasid_table_ops *
> @@ -139,4 +164,6 @@ static inline void iommu_pasid_flush_tlbs(struct iommu_pasid_table *table,
>  	table->cfg.sync->tlb_flush(table->cookie, pasid, entry);
>  }
>  
> +extern struct iommu_pasid_init_fns arm_smmu_v3_pasid_init_fns;
> +
>  #endif /* __IOMMU_PASID_H */

--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Sinan Kaya April 10, 2018, 6:53 p.m. UTC | #46
On 2/12/2018 1:33 PM, Jean-Philippe Brucker wrote:
> +static void io_mm_detach_all_locked(struct iommu_bond *bond)
> +{
> +	while (!io_mm_detach_locked(bond));
> +}
> +

I don't remember if I mentioned this before or not but I think this loop
needs a little bit relaxation with yield and maybe an informational message
with might help if wait exceeds some time.
Jean-Philippe Brucker April 13, 2018, 10:59 a.m. UTC | #47
On 10/04/18 19:53, Sinan Kaya wrote:
> On 2/12/2018 1:33 PM, Jean-Philippe Brucker wrote:
>> +static void io_mm_detach_all_locked(struct iommu_bond *bond)
>> +{
>> +	while (!io_mm_detach_locked(bond));
>> +}
>> +
> 
> I don't remember if I mentioned this before or not but I think this loop
> needs a little bit relaxation with yield and maybe an informational message
> with might help if wait exceeds some time.

Right, at the very least we should have a cpu_relax here. I think this
bit is going away, though, because I want to lift the possibility of
calling bind() for the same dev/mm pair multiple times. It's not useful
in my opinion because that call could only be issued by a given driver.

Thanks,
Jean
--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Sinan Kaya April 24, 2018, 1:32 a.m. UTC | #48
On 2/12/2018 1:33 PM, Jean-Philippe Brucker wrote:
> /**
>   * iommu_sva_device_init() - Initialize Shared Virtual Addressing for a device
>   * @dev: the device
> @@ -129,7 +439,10 @@ EXPORT_SYMBOL_GPL(iommu_sva_device_shutdown);
>  int iommu_sva_bind_device(struct device *dev, struct mm_struct *mm, int *pasid,
>  			  unsigned long flags, void *drvdata)
>  {
> +	int i, ret;
> +	struct io_mm *io_mm = NULL;
>  	struct iommu_domain *domain;
> +	struct iommu_bond *bond = NULL, *tmp;
>  	struct iommu_param *dev_param = dev->iommu_param;
>  
>  	domain = iommu_get_domain_for_dev(dev);
> @@ -145,7 +458,42 @@ int iommu_sva_bind_device(struct device *dev, struct mm_struct *mm, int *pasid,
>  	if (flags != (IOMMU_SVA_FEAT_PASID | IOMMU_SVA_FEAT_IOPF))
>  		return -EINVAL;
>  
> -	return -ENOSYS; /* TODO */
> +	/* If an io_mm already exists, use it */
> +	spin_lock(&iommu_sva_lock);
> +	idr_for_each_entry(&iommu_pasid_idr, io_mm, i) {
> +		if (io_mm->mm != mm || !io_mm_get_locked(io_mm))
> +			continue;
> +
> +		/* Is it already bound to this device? */
> +		list_for_each_entry(tmp, &io_mm->devices, mm_head) {
> +			if (tmp->dev != dev)
> +				continue;
> +
> +			bond = tmp;
> +			refcount_inc(&bond->refs);
> +			io_mm_put_locked(io_mm);
> +			break;
> +		}
> +		break;
> +	}
> +	spin_unlock(&iommu_sva_lock);
> +
> +	if (bond)

Please return pasid when you find an io_mm that is already bound. Something like
*pasid = io_mm->pasid should do the work here when bond is true.

> +		return 0;
Jean-Philippe Brucker April 24, 2018, 9:33 a.m. UTC | #49
On 24/04/18 02:32, Sinan Kaya wrote:
> On 2/12/2018 1:33 PM, Jean-Philippe Brucker wrote:
>> /**
>>   * iommu_sva_device_init() - Initialize Shared Virtual Addressing for a device
>>   * @dev: the device
>> @@ -129,7 +439,10 @@ EXPORT_SYMBOL_GPL(iommu_sva_device_shutdown);
>>  int iommu_sva_bind_device(struct device *dev, struct mm_struct *mm, int *pasid,
>>  			  unsigned long flags, void *drvdata)
>>  {
>> +	int i, ret;
>> +	struct io_mm *io_mm = NULL;
>>  	struct iommu_domain *domain;
>> +	struct iommu_bond *bond = NULL, *tmp;
>>  	struct iommu_param *dev_param = dev->iommu_param;
>>  
>>  	domain = iommu_get_domain_for_dev(dev);
>> @@ -145,7 +458,42 @@ int iommu_sva_bind_device(struct device *dev, struct mm_struct *mm, int *pasid,
>>  	if (flags != (IOMMU_SVA_FEAT_PASID | IOMMU_SVA_FEAT_IOPF))
>>  		return -EINVAL;
>>  
>> -	return -ENOSYS; /* TODO */
>> +	/* If an io_mm already exists, use it */
>> +	spin_lock(&iommu_sva_lock);
>> +	idr_for_each_entry(&iommu_pasid_idr, io_mm, i) {
>> +		if (io_mm->mm != mm || !io_mm_get_locked(io_mm))
>> +			continue;
>> +
>> +		/* Is it already bound to this device? */
>> +		list_for_each_entry(tmp, &io_mm->devices, mm_head) {
>> +			if (tmp->dev != dev)
>> +				continue;
>> +
>> +			bond = tmp;
>> +			refcount_inc(&bond->refs);
>> +			io_mm_put_locked(io_mm);
>> +			break;
>> +		}
>> +		break;
>> +	}
>> +	spin_unlock(&iommu_sva_lock);
>> +
>> +	if (bond)
> 
> Please return pasid when you find an io_mm that is already bound. Something like
> *pasid = io_mm->pasid should do the work here when bond is true.

Right. I think we should also keep returning 0, not switch to -EEXIST or
similar. So in next version a driver can call bind(devX, mmY) multiple
times, but the first unbind() removes the bond.

Thanks,
Jean
--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Sinan Kaya April 24, 2018, 5:17 p.m. UTC | #50
On 4/24/2018 5:33 AM, Jean-Philippe Brucker wrote:
>> Please return pasid when you find an io_mm that is already bound. Something like
>> *pasid = io_mm->pasid should do the work here when bond is true.
> Right. I think we should also keep returning 0, not switch to -EEXIST or
> similar. So in next version a driver can call bind(devX, mmY) multiple
> times, but the first unbind() removes the bond.

If we are going to allow multiple binds, then the last unbind should
remove the bond rather than the first one via reference counting.
Jean-Philippe Brucker April 24, 2018, 6:52 p.m. UTC | #51
On 24/04/18 18:17, Sinan Kaya wrote:
> On 4/24/2018 5:33 AM, Jean-Philippe Brucker wrote:
>>> Please return pasid when you find an io_mm that is already bound. Something like
>>> *pasid = io_mm->pasid should do the work here when bond is true.
>> Right. I think we should also keep returning 0, not switch to -EEXIST or
>> similar. So in next version a driver can call bind(devX, mmY) multiple
>> times, but the first unbind() removes the bond.
> 
> If we are going to allow multiple binds, then the last unbind should
> remove the bond rather than the first one via reference counting.

Yeah that's probably better. Since a bond belongs to a device driver it
doesn't need multiple bind/unbind, so earlier in this thread (1/37) I
talked about removing the bond->refs. But thinking about it, there still
is a need for it. When mm exits, we now need to call the device driver's
mm_exit handler outside of the spinlock, so we have to take a ref in
order to prevent a concurrent unbind() from freeing the bond.

Thanks,
Jean
--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html