[{"id":1770284,"web_url":"http://patchwork.ozlabs.org/comment/1770284/","msgid":"<1505751137.11871.2.camel@redhat.com>","list_archive_url":null,"date":"2017-09-18T16:12:17","subject":"Re: [PATCH 06/16] thunderbolt: Add support for XDomain discovery\n\tprotocol","submitter":{"id":665,"url":"http://patchwork.ozlabs.org/api/people/665/","name":"Dan Williams","email":"dcbw@redhat.com"},"content":"On Mon, 2017-09-18 at 18:30 +0300, Mika Westerberg wrote:\n> When two hosts are connected over a Thunderbolt cable, there is a\n> protocol they can use to communicate capabilities supported by the\n> host.\n> The discovery protocol uses the automatically configured control channel\n> (ring 0) and is built on top of request/response transactions using\n> special XDomain primitives provided by the Thunderbolt base protocol.\n> \n> The capabilities consist of a root directory block of basic\n> properties\n> used for identification of the host, and then there can be zero or\n> more\n> directories each describing a Thunderbolt service and its\n> capabilities.\n> \n> Once both sides have discovered what is supported, the two hosts can\n> set up high-speed DMA paths and transfer data to the other side using\n> whatever protocol was agreed based on the properties. The software\n> protocol used to communicate which DMA paths to enable is service\n> specific.\n> \n> This patch adds support for the XDomain discovery protocol to the\n> Thunderbolt bus. We model each remote host connection as a Linux\n> XDomain\n> device. 
For each Thunderbolt service found supported on the XDomain\n> device, we create Linux Thunderbolt service device which Thunderbolt\n> service drivers can then bind to based on the protocol identification\n> information retrieved from the property directory describing the\n> service.\n> \n> This code is based on the work done by Amir Levy and Michael Jamet.\n> \n> Signed-off-by: Michael Jamet <michael.jamet@intel.com>\n> Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>\n> Reviewed-by: Yehezkel Bernat <yehezkel.bernat@intel.com>\n> ---\n>  Documentation/ABI/testing/sysfs-bus-thunderbolt |   48 +\n>  drivers/thunderbolt/Makefile                    |    2 +-\n>  drivers/thunderbolt/ctl.c                       |   11 +-\n>  drivers/thunderbolt/ctl.h                       |    2 +-\n>  drivers/thunderbolt/domain.c                    |  197 ++-\n>  drivers/thunderbolt/icm.c                       |  218 +++-\n>  drivers/thunderbolt/nhi.h                       |    2 +\n>  drivers/thunderbolt/switch.c                    |    7 +-\n>  drivers/thunderbolt/tb.h                        |   39 +-\n>  drivers/thunderbolt/tb_msgs.h                   |  123 ++\n>  drivers/thunderbolt/xdomain.c                   | 1576\n> +++++++++++++++++++++++\n>  include/linux/mod_devicetable.h                 |   26 +\n>  include/linux/thunderbolt.h                     |  242 ++++\n>  scripts/mod/devicetable-offsets.c               |    7 +\n>  scripts/mod/file2alias.c                        |   25 +\n>  15 files changed, 2507 insertions(+), 18 deletions(-)\n>  create mode 100644 drivers/thunderbolt/xdomain.c\n> \n> diff --git a/Documentation/ABI/testing/sysfs-bus-thunderbolt\n> b/Documentation/ABI/testing/sysfs-bus-thunderbolt\n> index 392bef5bd399..cb48850bd79b 100644\n> --- a/Documentation/ABI/testing/sysfs-bus-thunderbolt\n> +++ b/Documentation/ABI/testing/sysfs-bus-thunderbolt\n> @@ -110,3 +110,51 @@ Description:\tWhen new NVM image is\n> written to the non-active NVM\n>  
\t\tis directly the status value from the DMA\n> configuration\n>  \t\tbased mailbox before the device is power cycled.\n> Writing\n>  \t\t0 here clears the status.\n> +\n> +What:\t\t/sys/bus/thunderbolt/devices/<xdomain>.<service\n> >/key\n> +Date:\t\tDec 2017\n> +KernelVersion:\t4.14\n> +Contact:\tthunderbolt-software@lists.01.org\n> +Description:\tThis contains the name of the property directory the\n> XDomain\n> +\t\tservice exposes. This entry describes the protocol\n> in\n> +\t\tquestion. The following directories are already reserved\n> by\n> +\t\tthe Apple XDomain specification:\n> +\n> +\t\tnetwork:  IP/ethernet over Thunderbolt\n> +\t\ttargetdm: Target disk mode protocol over Thunderbolt\n> +\t\textdisp:  External display mode protocol over\n> Thunderbolt\n> +\n> +What:\t\t/sys/bus/thunderbolt/devices/<xdomain>.<service\n> >/modalias\n> +Date:\t\tDec 2017\n> +KernelVersion:\t4.14\n> +Contact:\tthunderbolt-software@lists.01.org\n> +Description:\tStores the same MODALIAS value emitted by uevent\n> for\n> +\t\tthe XDomain service. 
Format: tbtsvc:kSpNvNrN\n> +\n> +What:\t\t/sys/bus/thunderbolt/devices/<xdomain>.<service\n> >/prtcid\n> +Date:\t\tDec 2017\n> +KernelVersion:\t4.14\n> +Contact:\tthunderbolt-software@lists.01.org\n> +Description:\tThis contains the XDomain protocol identifier the\n> XDomain\n> +\t\tservice supports.\n> +\n> +What:\t\t/sys/bus/thunderbolt/devices/<xdomain>.<service\n> >/prtcvers\n> +Date:\t\tDec 2017\n> +KernelVersion:\t4.14\n> +Contact:\tthunderbolt-software@lists.01.org\n> +Description:\tThis contains the XDomain protocol version the\n> XDomain\n> +\t\tservice supports.\n> +\n> +What:\t\t/sys/bus/thunderbolt/devices/<xdomain>.<service\n> >/prtcrevs\n> +Date:\t\tDec 2017\n> +KernelVersion:\t4.14\n> +Contact:\tthunderbolt-software@lists.01.org\n> +Description:\tThis contains the XDomain software version the\n> XDomain\n> +\t\tservice supports.\n> +\n> +What:\t\t/sys/bus/thunderbolt/devices/<xdomain>.<service\n> >/prtcstns\n> +Date:\t\tDec 2017\n> +KernelVersion:\t4.14\n> +Contact:\tthunderbolt-software@lists.01.org\n> +Description:\tThis contains the XDomain service specific settings\n> as\n> +\t\ta bitmask. 
Format: %x\n> diff --git a/drivers/thunderbolt/Makefile\n> b/drivers/thunderbolt/Makefile\n> index 7afd21f5383a..f2f0de27252b 100644\n> --- a/drivers/thunderbolt/Makefile\n> +++ b/drivers/thunderbolt/Makefile\n> @@ -1,3 +1,3 @@\n>  obj-${CONFIG_THUNDERBOLT} := thunderbolt.o\n>  thunderbolt-objs := nhi.o ctl.o tb.o switch.o cap.o path.o\n> tunnel_pci.o eeprom.o\n> -thunderbolt-objs += domain.o dma_port.o icm.o property.o\n> +thunderbolt-objs += domain.o dma_port.o icm.o property.o xdomain.o\n> diff --git a/drivers/thunderbolt/ctl.c b/drivers/thunderbolt/ctl.c\n> index e6a4c9458c76..46e393c5fd1d 100644\n> --- a/drivers/thunderbolt/ctl.c\n> +++ b/drivers/thunderbolt/ctl.c\n> @@ -368,10 +368,10 @@ static int tb_ctl_tx(struct tb_ctl *ctl, const\n> void *data, size_t len,\n>  /**\n>   * tb_ctl_handle_event() - acknowledge a plug event, invoke ctl-\n> >callback\n>   */\n> -static void tb_ctl_handle_event(struct tb_ctl *ctl, enum\n> tb_cfg_pkg_type type,\n> +static bool tb_ctl_handle_event(struct tb_ctl *ctl, enum\n> tb_cfg_pkg_type type,\n>  \t\t\t\tstruct ctl_pkg *pkg, size_t size)\n>  {\n> -\tctl->callback(ctl->callback_data, type, pkg->buffer, size);\n> +\treturn ctl->callback(ctl->callback_data, type, pkg->buffer,\n> size);\n>  }\n>  \n>  static void tb_ctl_rx_submit(struct ctl_pkg *pkg)\n> @@ -444,6 +444,8 @@ static void tb_ctl_rx_callback(struct tb_ring\n> *ring, struct ring_frame *frame,\n>  \t\tbreak;\n>  \n>  \tcase TB_CFG_PKG_EVENT:\n> +\tcase TB_CFG_PKG_XDOMAIN_RESP:\n> +\tcase TB_CFG_PKG_XDOMAIN_REQ:\n>  \t\tif (*(__be32 *)(pkg->buffer + frame->size) != crc32)\n> {\n>  \t\t\ttb_ctl_err(pkg->ctl,\n>  \t\t\t\t   \"RX: checksum mismatch, dropping\n> packet\\n\");\n> @@ -451,8 +453,9 @@ static void tb_ctl_rx_callback(struct tb_ring\n> *ring, struct ring_frame *frame,\n>  \t\t}\n>  \t\t/* Fall through */\n>  \tcase TB_CFG_PKG_ICM_EVENT:\n> -\t\ttb_ctl_handle_event(pkg->ctl, frame->eof, pkg,\n> frame->size);\n> -\t\tgoto rx;\n> +\t\tif (tb_ctl_handle_event(pkg->ctl, 
frame->eof, pkg,\n> frame->size))\n> +\t\t\tgoto rx;\n> +\t\tbreak;\n>  \n>  \tdefault:\n>  \t\tbreak;\n> diff --git a/drivers/thunderbolt/ctl.h b/drivers/thunderbolt/ctl.h\n> index d0f21e1e0b8b..85c49dd301ea 100644\n> --- a/drivers/thunderbolt/ctl.h\n> +++ b/drivers/thunderbolt/ctl.h\n> @@ -16,7 +16,7 @@\n>  /* control channel */\n>  struct tb_ctl;\n>  \n> -typedef void (*event_cb)(void *data, enum tb_cfg_pkg_type type,\n> +typedef bool (*event_cb)(void *data, enum tb_cfg_pkg_type type,\n>  \t\t\t const void *buf, size_t size);\n>  \n>  struct tb_ctl *tb_ctl_alloc(struct tb_nhi *nhi, event_cb cb, void\n> *cb_data);\n> diff --git a/drivers/thunderbolt/domain.c\n> b/drivers/thunderbolt/domain.c\n> index 9f2dcd48974d..29d6436ec8ce 100644\n> --- a/drivers/thunderbolt/domain.c\n> +++ b/drivers/thunderbolt/domain.c\n> @@ -20,6 +20,98 @@\n>  \n>  static DEFINE_IDA(tb_domain_ida);\n>  \n> +static bool match_service_id(const struct tb_service_id *id,\n> +\t\t\t     const struct tb_service *svc)\n> +{\n> +\tif (id->match_flags & TBSVC_MATCH_PROTOCOL_KEY) {\n> +\t\tif (strcmp(id->protocol_key, svc->key))\n> +\t\t\treturn false;\n> +\t}\n> +\n> +\tif (id->match_flags & TBSVC_MATCH_PROTOCOL_ID) {\n> +\t\tif (id->protocol_id != svc->prtcid)\n> +\t\t\treturn false;\n> +\t}\n> +\n> +\tif (id->match_flags & TBSVC_MATCH_PROTOCOL_VERSION) {\n> +\t\tif (id->protocol_version != svc->prtcvers)\n> +\t\t\treturn false;\n> +\t}\n> +\n> +\tif (id->match_flags & TBSVC_MATCH_PROTOCOL_REVISION) {\n> +\t\tif (id->protocol_revision != svc->prtcrevs)\n> +\t\t\treturn false;\n> +\t}\n> +\n> +\treturn true;\n> +}\n> +\n> +static const struct tb_service_id *__tb_service_match(struct device\n> *dev,\n> +\t\t\t\t\t\t      struct\n> device_driver *drv)\n> +{\n> +\tstruct tb_service_driver *driver;\n> +\tconst struct tb_service_id *ids;\n> +\tstruct tb_service *svc;\n> +\n> +\tsvc = tb_to_service(dev);\n> +\tif (!svc)\n> +\t\treturn NULL;\n> +\n> +\tdriver = container_of(drv, struct 
tb_service_driver,\n> driver);\n> +\tif (!driver->id_table)\n> +\t\treturn NULL;\n> +\n> +\tfor (ids = driver->id_table; ids->match_flags != 0; ids++) {\n> +\t\tif (match_service_id(ids, svc))\n> +\t\t\treturn ids;\n> +\t}\n> +\n> +\treturn NULL;\n> +}\n> +\n> +static int tb_service_match(struct device *dev, struct device_driver\n> *drv)\n> +{\n> +\treturn !!__tb_service_match(dev, drv);\n> +}\n> +\n> +static int tb_service_probe(struct device *dev)\n> +{\n> +\tstruct tb_service *svc = tb_to_service(dev);\n> +\tstruct tb_service_driver *driver;\n> +\tconst struct tb_service_id *id;\n> +\n> +\tdriver = container_of(dev->driver, struct tb_service_driver,\n> driver);\n> +\tid = __tb_service_match(dev, &driver->driver);\n> +\n> +\treturn driver->probe(svc, id);\n\nCould you pass 'dev' to the probe function so that things like the\nnetwork sub-driver can sensibly link the netdev to the parent hardware\nin sysfs with SET_NETDEV_DEV()?\n\nDan\n\n> +}\n> +\n> +static int tb_service_remove(struct device *dev)\n> +{\n> +\tstruct tb_service *svc = tb_to_service(dev);\n> +\tstruct tb_service_driver *driver;\n> +\n> +\tdriver = container_of(dev->driver, struct tb_service_driver,\n> driver);\n> +\tif (driver->remove)\n> +\t\tdriver->remove(svc);\n> +\n> +\treturn 0;\n> +}\n> +\n> +static void tb_service_shutdown(struct device *dev)\n> +{\n> +\tstruct tb_service_driver *driver;\n> +\tstruct tb_service *svc;\n> +\n> +\tsvc = tb_to_service(dev);\n> +\tif (!svc || !dev->driver)\n> +\t\treturn;\n> +\n> +\tdriver = container_of(dev->driver, struct tb_service_driver,\n> driver);\n> +\tif (driver->shutdown)\n> +\t\tdriver->shutdown(svc);\n> +}\n> +\n>  static const char * const tb_security_names[] = {\n>  \t[TB_SECURITY_NONE] = \"none\",\n>  \t[TB_SECURITY_USER] = \"user\",\n> @@ -52,6 +144,10 @@ static const struct attribute_group\n> *domain_attr_groups[] = {\n>  \n>  struct bus_type tb_bus_type = {\n>  \t.name = \"thunderbolt\",\n> +\t.match = tb_service_match,\n> +\t.probe = 
tb_service_probe,\n> +\t.remove = tb_service_remove,\n> +\t.shutdown = tb_service_shutdown,\n> };\n>  \n>  static void tb_domain_release(struct device *dev)\n> @@ -128,17 +224,26 @@ struct tb *tb_domain_alloc(struct tb_nhi *nhi,\n> size_t privsize)\n>  \treturn NULL;\n>  }\n>  \n> -static void tb_domain_event_cb(void *data, enum tb_cfg_pkg_type\n> type,\n> +static bool tb_domain_event_cb(void *data, enum tb_cfg_pkg_type\n> type,\n>  \t\t\t       const void *buf, size_t size)\n>  {\n>  \tstruct tb *tb = data;\n>  \n>  \tif (!tb->cm_ops->handle_event) {\n>  \t\ttb_warn(tb, \"domain does not have event handler\\n\");\n> -\t\treturn;\n> +\t\treturn true;\n>  \t}\n>  \n> -\ttb->cm_ops->handle_event(tb, type, buf, size);\n> +\tswitch (type) {\n> +\tcase TB_CFG_PKG_XDOMAIN_REQ:\n> +\tcase TB_CFG_PKG_XDOMAIN_RESP:\n> +\t\treturn tb_xdomain_handle_request(tb, type, buf,\n> size);\n> +\n> +\tdefault:\n> +\t\ttb->cm_ops->handle_event(tb, type, buf, size);\n> +\t}\n> +\n> +\treturn true;\n>  }\n>  \n>  /**\n> @@ -443,9 +548,92 @@ int tb_domain_disconnect_pcie_paths(struct tb\n> *tb)\n>  \treturn tb->cm_ops->disconnect_pcie_paths(tb);\n>  }\n>  \n> +/**\n> + * tb_domain_approve_xdomain_paths() - Enable DMA paths for XDomain\n> + * @tb: Domain enabling the DMA paths\n> + * @xd: XDomain DMA paths are created to\n> + *\n> + * Calls connection manager specific method to enable DMA paths to\n> the\n> + * XDomain in question.\n> + *\n> + * Return: %0 in case of success and negative errno otherwise. 
In\n> + * particular returns %-ENOTSUPP if the connection manager\n> + * implementation does not support XDomains.\n> + */\n> +int tb_domain_approve_xdomain_paths(struct tb *tb, struct tb_xdomain\n> *xd)\n> +{\n> +\tif (!tb->cm_ops->approve_xdomain_paths)\n> +\t\treturn -ENOTSUPP;\n> +\n> +\treturn tb->cm_ops->approve_xdomain_paths(tb, xd);\n> +}\n> +\n> +/**\n> + * tb_domain_disconnect_xdomain_paths() - Disable DMA paths for\n> XDomain\n> + * @tb: Domain disabling the DMA paths\n> + * @xd: XDomain whose DMA paths are disconnected\n> + *\n> + * Calls connection manager specific method to disconnect DMA paths\n> to\n> + * the XDomain in question.\n> + *\n> + * Return: 0% in case of success and negative errno otherwise. In\n> + * particular returns %-ENOTSUPP if the connection manager\n> + * implementation does not support XDomains.\n> + */\n> +int tb_domain_disconnect_xdomain_paths(struct tb *tb, struct\n> tb_xdomain *xd)\n> +{\n> +\tif (!tb->cm_ops->disconnect_xdomain_paths)\n> +\t\treturn -ENOTSUPP;\n> +\n> +\treturn tb->cm_ops->disconnect_xdomain_paths(tb, xd);\n> +}\n> +\n> +static int disconnect_xdomain(struct device *dev, void *data)\n> +{\n> +\tstruct tb_xdomain *xd;\n> +\tstruct tb *tb = data;\n> +\tint ret = 0;\n> +\n> +\txd = tb_to_xdomain(dev);\n> +\tif (xd && xd->tb == tb)\n> +\t\tret = tb_xdomain_disable_paths(xd);\n> +\n> +\treturn ret;\n> +}\n> +\n> +/**\n> + * tb_domain_disconnect_all_paths() - Disconnect all paths for the\n> domain\n> + * @tb: Domain whose paths are disconnected\n> + *\n> + * This function can be used to disconnect all paths (PCIe, XDomain)\n> for\n> + * example in preparation for host NVM firmware upgrade. 
After this\n> is\n> + called the paths cannot be established without resetting the\n> switch.\n> +\n> + * Return: %0 in case of success and negative errno otherwise.\n> + */\n> +int tb_domain_disconnect_all_paths(struct tb *tb)\n> +{\n> +\tint ret;\n> +\n> +\tret = tb_domain_disconnect_pcie_paths(tb);\n> +\tif (ret)\n> +\t\treturn ret;\n> +\n> +\treturn bus_for_each_dev(&tb_bus_type, NULL, tb,\n> disconnect_xdomain);\n> +}\n> +\n>  int tb_domain_init(void)\n>  {\n> -\treturn bus_register(&tb_bus_type);\n> +\tint ret;\n> +\n> +\tret = tb_xdomain_init();\n> +\tif (ret)\n> +\t\treturn ret;\n> +\tret = bus_register(&tb_bus_type);\n> +\tif (ret)\n> +\t\ttb_xdomain_exit();\n> +\n> +\treturn ret;\n> +}\n>  \n>  void tb_domain_exit(void)\n> @@ -453,4 +641,5 @@ void tb_domain_exit(void)\n>  \tbus_unregister(&tb_bus_type);\n>  \tida_destroy(&tb_domain_ida);\n>  \ttb_switch_exit();\n> +\ttb_xdomain_exit();\n>  }\n> diff --git a/drivers/thunderbolt/icm.c b/drivers/thunderbolt/icm.c\n> index 8c22b91ed040..ab02d13f40b7 100644\n> --- a/drivers/thunderbolt/icm.c\n> +++ b/drivers/thunderbolt/icm.c\n> @@ -60,6 +60,8 @@\n>   * @get_route: Find a route string for given switch\n>   * @device_connected: Handle device connected ICM message\n>   * @device_disconnected: Handle device disconnected ICM message\n> + * @xdomain_connected: Handle XDomain connected ICM message\n> + * @xdomain_disconnected: Handle XDomain disconnected ICM message\n>   */\n>  struct icm {\n>  \tstruct mutex request_lock;\n> @@ -74,6 +76,10 @@ struct icm {\n>  \t\t\t\t const struct icm_pkg_header *hdr);\n>  \tvoid (*device_disconnected)(struct tb *tb,\n>  \t\t\t\t    const struct icm_pkg_header\n> *hdr);\n> +\tvoid (*xdomain_connected)(struct tb *tb,\n> +\t\t\t\t  const struct icm_pkg_header *hdr);\n> +\tvoid (*xdomain_disconnected)(struct tb *tb,\n> +\t\t\t\t     const struct icm_pkg_header\n> *hdr);\n>  };\n>  \n>  struct icm_notification {\n> @@ -89,7 +95,10 @@ static inline struct tb *icm_to_tb(struct 
icm\n> *icm)\n>  \n>  static inline u8 phy_port_from_route(u64 route, u8 depth)\n>  {\n> -\treturn tb_phy_port_from_link(route >> ((depth - 1) * 8));\n> +\tu8 link;\n> +\n> +\tlink = depth ? route >> ((depth - 1) * 8) : route;\n> +\treturn tb_phy_port_from_link(link);\n>  }\n>  \n>  static inline u8 dual_link_from_link(u8 link)\n> @@ -320,6 +329,51 @@ static int icm_fr_challenge_switch_key(struct tb\n> *tb, struct tb_switch *sw,\n>  \treturn 0;\n>  }\n>  \n> +static int icm_fr_approve_xdomain_paths(struct tb *tb, struct\n> tb_xdomain *xd)\n> +{\n> +\tstruct icm_fr_pkg_approve_xdomain_response reply;\n> +\tstruct icm_fr_pkg_approve_xdomain request;\n> +\tint ret;\n> +\n> +\tmemset(&request, 0, sizeof(request));\n> +\trequest.hdr.code = ICM_APPROVE_XDOMAIN;\n> +\trequest.link_info = xd->depth << ICM_LINK_INFO_DEPTH_SHIFT |\n> xd->link;\n> +\tmemcpy(&request.remote_uuid, xd->remote_uuid, sizeof(*xd-\n> >remote_uuid));\n> +\n> +\trequest.transmit_path = xd->transmit_path;\n> +\trequest.transmit_ring = xd->transmit_ring;\n> +\trequest.receive_path = xd->receive_path;\n> +\trequest.receive_ring = xd->receive_ring;\n> +\n> +\tmemset(&reply, 0, sizeof(reply));\n> +\tret = icm_request(tb, &request, sizeof(request), &reply,\n> sizeof(reply),\n> +\t\t\t  1, ICM_TIMEOUT);\n> +\tif (ret)\n> +\t\treturn ret;\n> +\n> +\tif (reply.hdr.flags & ICM_FLAGS_ERROR)\n> +\t\treturn -EIO;\n> +\n> +\treturn 0;\n> +}\n> +\n> +static int icm_fr_disconnect_xdomain_paths(struct tb *tb, struct\n> tb_xdomain *xd)\n> +{\n> +\tu8 phy_port;\n> +\tu8 cmd;\n> +\n> +\tphy_port = tb_phy_port_from_link(xd->link);\n> +\tif (phy_port == 0)\n> +\t\tcmd = NHI_MAILBOX_DISCONNECT_PA;\n> +\telse\n> +\t\tcmd = NHI_MAILBOX_DISCONNECT_PB;\n> +\n> +\tnhi_mailbox_cmd(tb->nhi, cmd, 1);\n> +\tusleep_range(10, 50);\n> +\tnhi_mailbox_cmd(tb->nhi, cmd, 2);\n> +\treturn 0;\n> +}\n> +\n>  static void remove_switch(struct tb_switch *sw)\n>  {\n>  \tstruct tb_switch *parent_sw;\n> @@ -475,6 +529,141 @@ 
icm_fr_device_disconnected(struct tb *tb, const\n> struct icm_pkg_header *hdr)\n>  \ttb_switch_put(sw);\n>  }\n>  \n> +static void remove_xdomain(struct tb_xdomain *xd)\n> +{\n> +\tstruct tb_switch *sw;\n> +\n> +\tsw = tb_to_switch(xd->dev.parent);\n> +\ttb_port_at(xd->route, sw)->xdomain = NULL;\n> +\ttb_xdomain_remove(xd);\n> +}\n> +\n> +static void\n> +icm_fr_xdomain_connected(struct tb *tb, const struct icm_pkg_header\n> *hdr)\n> +{\n> +\tconst struct icm_fr_event_xdomain_connected *pkg =\n> +\t\t(const struct icm_fr_event_xdomain_connected *)hdr;\n> +\tstruct tb_xdomain *xd;\n> +\tstruct tb_switch *sw;\n> +\tu8 link, depth;\n> +\tbool approved;\n> +\tu64 route;\n> +\n> +\t/*\n> +\t * After NVM upgrade adding root switch device fails because\n> we\n> +\t * initiated reset. During that time ICM might still send\n> +\t * XDomain connected message which we ignore here.\n> +\t */\n> +\tif (!tb->root_switch)\n> +\t\treturn;\n> +\n> +\tlink = pkg->link_info & ICM_LINK_INFO_LINK_MASK;\n> +\tdepth = (pkg->link_info & ICM_LINK_INFO_DEPTH_MASK) >>\n> +\t\tICM_LINK_INFO_DEPTH_SHIFT;\n> +\tapproved = pkg->link_info & ICM_LINK_INFO_APPROVED;\n> +\n> +\tif (link > ICM_MAX_LINK || depth > ICM_MAX_DEPTH) {\n> +\t\ttb_warn(tb, \"invalid topology %u.%u, ignoring\\n\",\n> link, depth);\n> +\t\treturn;\n> +\t}\n> +\n> +\troute = get_route(pkg->local_route_hi, pkg->local_route_lo);\n> +\n> +\txd = tb_xdomain_find_by_uuid(tb, &pkg->remote_uuid);\n> +\tif (xd) {\n> +\t\tu8 xd_phy_port, phy_port;\n> +\n> +\t\txd_phy_port = phy_port_from_route(xd->route, xd-\n> >depth);\n> +\t\tphy_port = phy_port_from_route(route, depth);\n> +\n> +\t\tif (xd->depth == depth && xd_phy_port == phy_port) {\n> +\t\t\txd->link = link;\n> +\t\t\txd->route = route;\n> +\t\t\txd->is_unplugged = false;\n> +\t\t\ttb_xdomain_put(xd);\n> +\t\t\treturn;\n> +\t\t}\n> +\n> +\t\t/*\n> +\t\t * If we find an existing XDomain connection remove\n> it\n> +\t\t * now. 
We need to go through login handshake and\n> +\t\t * everything anyway to be able to re-establish the\n> +\t\t * connection.\n> +\t\t */\n> +\t\tremove_xdomain(xd);\n> +\t\ttb_xdomain_put(xd);\n> +\t}\n> +\n> +\t/*\n> +\t * Check if there already exists an XDomain in the same place\n> +\t * as the new one and in that case remove it because it is\n> +\t * most likely another host that got disconnected.\n> +\t */\n> +\txd = tb_xdomain_find_by_link_depth(tb, link, depth);\n> +\tif (!xd) {\n> +\t\tu8 dual_link;\n> +\n> +\t\tdual_link = dual_link_from_link(link);\n> +\t\tif (dual_link)\n> +\t\t\txd = tb_xdomain_find_by_link_depth(tb,\n> dual_link,\n> +\t\t\t\t\t\t\t   depth);\n> +\t}\n> +\tif (xd) {\n> +\t\tremove_xdomain(xd);\n> +\t\ttb_xdomain_put(xd);\n> +\t}\n> +\n> +\t/*\n> +\t * If the user disconnected a switch during suspend and\n> +\t * connected another host to the same port, remove the\n> switch\n> +\t * first.\n> +\t */\n> +\tsw = get_switch_at_route(tb->root_switch, route);\n> +\tif (sw)\n> +\t\tremove_switch(sw);\n> +\n> +\tsw = tb_switch_find_by_link_depth(tb, link, depth);\n> +\tif (!sw) {\n> +\t\ttb_warn(tb, \"no switch exists at %u.%u, ignoring\\n\",\n> link,\n> +\t\t\tdepth);\n> +\t\treturn;\n> +\t}\n> +\n> +\txd = tb_xdomain_alloc(sw->tb, &sw->dev, route,\n> +\t\t\t      &pkg->local_uuid, &pkg->remote_uuid);\n> +\tif (!xd) {\n> +\t\ttb_switch_put(sw);\n> +\t\treturn;\n> +\t}\n> +\n> +\txd->link = link;\n> +\txd->depth = depth;\n> +\n> +\ttb_port_at(route, sw)->xdomain = xd;\n> +\n> +\ttb_xdomain_add(xd);\n> +\ttb_switch_put(sw);\n> +}\n> +\n> +static void\n> +icm_fr_xdomain_disconnected(struct tb *tb, const struct\n> icm_pkg_header *hdr)\n> +{\n> +\tconst struct icm_fr_event_xdomain_disconnected *pkg =\n> +\t\t(const struct icm_fr_event_xdomain_disconnected\n> *)hdr;\n> +\tstruct tb_xdomain *xd;\n> +\n> +\t/*\n> +\t * If the connection is through one or multiple devices, the\n> +\t * XDomain device is removed along with them so it is fine\n> if 
we\n> +\t * cannot find it here.\n> +\t */\n> +\txd = tb_xdomain_find_by_uuid(tb, &pkg->remote_uuid);\n> +\tif (xd) {\n> +\t\tremove_xdomain(xd);\n> +\t\ttb_xdomain_put(xd);\n> +\t}\n> +}\n> +\n>  static struct pci_dev *get_upstream_port(struct pci_dev *pdev)\n>  {\n>  \tstruct pci_dev *parent;\n> @@ -594,6 +783,12 @@ static void icm_handle_notification(struct\n> work_struct *work)\n>  \tcase ICM_EVENT_DEVICE_DISCONNECTED:\n>  \t\ticm->device_disconnected(tb, n->pkg);\n>  \t\tbreak;\n> +\tcase ICM_EVENT_XDOMAIN_CONNECTED:\n> +\t\ticm->xdomain_connected(tb, n->pkg);\n> +\t\tbreak;\n> +\tcase ICM_EVENT_XDOMAIN_DISCONNECTED:\n> +\t\ticm->xdomain_disconnected(tb, n->pkg);\n> +\t\tbreak;\n>  \t}\n>  \n>  \tmutex_unlock(&tb->lock);\n> @@ -927,6 +1122,10 @@ static void icm_unplug_children(struct\n> tb_switch *sw)\n>  \n>  \t\tif (tb_is_upstream_port(port))\n>  \t\t\tcontinue;\n> +\t\tif (port->xdomain) {\n> +\t\t\tport->xdomain->is_unplugged = true;\n> +\t\t\tcontinue;\n> +\t\t}\n>  \t\tif (!port->remote)\n>  \t\t\tcontinue;\n>  \n> @@ -943,6 +1142,13 @@ static void icm_free_unplugged_children(struct\n> tb_switch *sw)\n>  \n>  \t\tif (tb_is_upstream_port(port))\n>  \t\t\tcontinue;\n> +\n> +\t\tif (port->xdomain && port->xdomain->is_unplugged) {\n> +\t\t\ttb_xdomain_remove(port->xdomain);\n> +\t\t\tport->xdomain = NULL;\n> +\t\t\tcontinue;\n> +\t\t}\n> +\n>  \t\tif (!port->remote)\n>  \t\t\tcontinue;\n>  \n> @@ -1009,8 +1215,10 @@ static int icm_start(struct tb *tb)\n>  \ttb->root_switch->no_nvm_upgrade = x86_apple_machine;\n>  \n>  \tret = tb_switch_add(tb->root_switch);\n> -\tif (ret)\n> +\tif (ret) {\n>  \t\ttb_switch_put(tb->root_switch);\n> +\t\ttb->root_switch = NULL;\n> +\t}\n>  \n>  \treturn ret;\n>  }\n> @@ -1042,6 +1250,8 @@ static const struct tb_cm_ops icm_fr_ops = {\n>  \t.add_switch_key = icm_fr_add_switch_key,\n>  \t.challenge_switch_key = icm_fr_challenge_switch_key,\n>  \t.disconnect_pcie_paths = icm_disconnect_pcie_paths,\n> +\t.approve_xdomain_paths = 
icm_fr_approve_xdomain_paths,\n> +\t.disconnect_xdomain_paths = icm_fr_disconnect_xdomain_paths,\n>  };\n>  \n>  struct tb *icm_probe(struct tb_nhi *nhi)\n> @@ -1064,6 +1274,8 @@ struct tb *icm_probe(struct tb_nhi *nhi)\n>  \t\ticm->get_route = icm_fr_get_route;\n>  \t\ticm->device_connected = icm_fr_device_connected;\n>  \t\ticm->device_disconnected =\n> icm_fr_device_disconnected;\n> +\t\ticm->xdomain_connected = icm_fr_xdomain_connected;\n> +\t\ticm->xdomain_disconnected =\n> icm_fr_xdomain_disconnected;\n>  \t\ttb->cm_ops = &icm_fr_ops;\n>  \t\tbreak;\n>  \n> @@ -1077,6 +1289,8 @@ struct tb *icm_probe(struct tb_nhi *nhi)\n>  \t\ticm->get_route = icm_ar_get_route;\n>  \t\ticm->device_connected = icm_fr_device_connected;\n>  \t\ticm->device_disconnected =\n> icm_fr_device_disconnected;\n> +\t\ticm->xdomain_connected = icm_fr_xdomain_connected;\n> +\t\ticm->xdomain_disconnected =\n> icm_fr_xdomain_disconnected;\n>  \t\ttb->cm_ops = &icm_fr_ops;\n>  \t\tbreak;\n>  \t}\n> diff --git a/drivers/thunderbolt/nhi.h b/drivers/thunderbolt/nhi.h\n> index 5b5bb2c436be..0e05828983db 100644\n> --- a/drivers/thunderbolt/nhi.h\n> +++ b/drivers/thunderbolt/nhi.h\n> @@ -157,6 +157,8 @@ enum nhi_mailbox_cmd {\n>  \tNHI_MAILBOX_SAVE_DEVS = 0x05,\n>  \tNHI_MAILBOX_DISCONNECT_PCIE_PATHS = 0x06,\n>  \tNHI_MAILBOX_DRV_UNLOADS = 0x07,\n> +\tNHI_MAILBOX_DISCONNECT_PA = 0x10,\n> +\tNHI_MAILBOX_DISCONNECT_PB = 0x11,\n>  \tNHI_MAILBOX_ALLOW_ALL_DEVS = 0x23,\n>  };\n>  \n> diff --git a/drivers/thunderbolt/switch.c\n> b/drivers/thunderbolt/switch.c\n> index 53f40c57df59..dfc357d33e1e 100644\n> --- a/drivers/thunderbolt/switch.c\n> +++ b/drivers/thunderbolt/switch.c\n> @@ -171,11 +171,11 @@ static int nvm_authenticate_host(struct\n> tb_switch *sw)\n>  \n>  \t/*\n>  \t * Root switch NVM upgrade requires that we disconnect the\n> -\t * existing PCIe paths first (in case it is not in safe mode\n> +\t * existing paths first (in case it is not in safe mode\n>  \t * already).\n>  \t */\n>  \tif 
(!sw->safe_mode) {\n> -\t\tret = tb_domain_disconnect_pcie_paths(sw->tb);\n> +\t\tret = tb_domain_disconnect_all_paths(sw->tb);\n>  \t\tif (ret)\n>  \t\t\treturn ret;\n>  \t\t/*\n> @@ -1363,6 +1363,9 @@ void tb_switch_remove(struct tb_switch *sw)\n>  \t\tif (sw->ports[i].remote)\n>  \t\t\ttb_switch_remove(sw->ports[i].remote->sw);\n>  \t\tsw->ports[i].remote = NULL;\n> +\t\tif (sw->ports[i].xdomain)\n> +\t\t\ttb_xdomain_remove(sw->ports[i].xdomain);\n> +\t\tsw->ports[i].xdomain = NULL;\n>  \t}\n>  \n>  \tif (!sw->is_unplugged)\n> diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h\n> index ea21d927bd09..74af9d4929ab 100644\n> --- a/drivers/thunderbolt/tb.h\n> +++ b/drivers/thunderbolt/tb.h\n> @@ -9,6 +9,7 @@\n>  \n>  #include <linux/nvmem-provider.h>\n>  #include <linux/pci.h>\n> +#include <linux/thunderbolt.h>\n>  #include <linux/uuid.h>\n>  \n>  #include \"tb_regs.h\"\n> @@ -109,14 +110,25 @@ struct tb_switch {\n>  \n>  /**\n>   * struct tb_port - a thunderbolt port, part of a tb_switch\n> + * @config: Cached port configuration read from registers\n> + * @sw: Switch the port belongs to\n> + * @remote: Remote port (%NULL if not connected)\n> + * @xdomain: Remote host (%NULL if not connected)\n> + * @cap_phy: Offset, zero if not found\n> + * @port: Port number on switch\n> + * @disabled: Disabled by eeprom\n> + * @dual_link_port: If the switch is connected using two ports,\n> points\n> + *\t\t    to the other port.\n> + * @link_nr: Is this primary or secondary port on the dual_link.\n>   */\n>  struct tb_port {\n>  \tstruct tb_regs_port_header config;\n>  \tstruct tb_switch *sw;\n> -\tstruct tb_port *remote; /* remote port, NULL if not\n> connected */\n> -\tint cap_phy; /* offset, zero if not found */\n> -\tu8 port; /* port number on switch */\n> -\tbool disabled; /* disabled by eeprom */\n> +\tstruct tb_port *remote;\n> +\tstruct tb_xdomain *xdomain;\n> +\tint cap_phy;\n> +\tu8 port;\n> +\tbool disabled;\n>  \tstruct tb_port *dual_link_port;\n>  \tu8 
link_nr:1;\n>  };\n> @@ -189,6 +201,8 @@ struct tb_path {\n>   * @add_switch_key: Add key to switch\n>   * @challenge_switch_key: Challenge switch using key\n>   * @disconnect_pcie_paths: Disconnects PCIe paths before NVM update\n> + * @approve_xdomain_paths: Approve (establish) XDomain DMA paths\n> + * @disconnect_xdomain_paths: Disconnect XDomain DMA paths\n>   */\n>  struct tb_cm_ops {\n>  \tint (*driver_ready)(struct tb *tb);\n> @@ -205,6 +219,8 @@ struct tb_cm_ops {\n>  \tint (*challenge_switch_key)(struct tb *tb, struct tb_switch\n> *sw,\n>  \t\t\t\t    const u8 *challenge, u8\n> *response);\n>  \tint (*disconnect_pcie_paths)(struct tb *tb);\n> +\tint (*approve_xdomain_paths)(struct tb *tb, struct\n> tb_xdomain *xd);\n> +\tint (*disconnect_xdomain_paths)(struct tb *tb, struct\n> tb_xdomain *xd);\n>  };\n>  \n>  static inline void *tb_priv(struct tb *tb)\n> @@ -331,6 +347,8 @@ extern struct device_type tb_switch_type;\n>  int tb_domain_init(void);\n>  void tb_domain_exit(void);\n>  void tb_switch_exit(void);\n> +int tb_xdomain_init(void);\n> +void tb_xdomain_exit(void);\n>  \n>  struct tb *tb_domain_alloc(struct tb_nhi *nhi, size_t privsize);\n>  int tb_domain_add(struct tb *tb);\n> @@ -343,6 +361,9 @@ int tb_domain_approve_switch(struct tb *tb,\n> struct tb_switch *sw);\n>  int tb_domain_approve_switch_key(struct tb *tb, struct tb_switch\n> *sw);\n>  int tb_domain_challenge_switch_key(struct tb *tb, struct tb_switch\n> *sw);\n>  int tb_domain_disconnect_pcie_paths(struct tb *tb);\n> +int tb_domain_approve_xdomain_paths(struct tb *tb, struct tb_xdomain\n> *xd);\n> +int tb_domain_disconnect_xdomain_paths(struct tb *tb, struct\n> tb_xdomain *xd);\n> +int tb_domain_disconnect_all_paths(struct tb *tb);\n>  \n>  static inline void tb_domain_put(struct tb *tb)\n>  {\n> @@ -422,4 +443,14 @@ static inline u64 tb_downstream_route(struct\n> tb_port *port)\n>  \t       | ((u64) port->port << (port->sw->config.depth * 8));\n>  }\n>  \n> +bool 
tb_xdomain_handle_request(struct tb *tb, enum tb_cfg_pkg_type\n> type,\n> +\t\t\t       const void *buf, size_t size);\n> +struct tb_xdomain *tb_xdomain_alloc(struct tb *tb, struct device\n> *parent,\n> +\t\t\t\t    u64 route, const uuid_t\n> *local_uuid,\n> +\t\t\t\t    const uuid_t *remote_uuid);\n> +void tb_xdomain_add(struct tb_xdomain *xd);\n> +void tb_xdomain_remove(struct tb_xdomain *xd);\n> +struct tb_xdomain *tb_xdomain_find_by_link_depth(struct tb *tb, u8\n> link,\n> +\t\t\t\t\t\t u8 depth);\n> +\n>  #endif\n> diff --git a/drivers/thunderbolt/tb_msgs.h\n> b/drivers/thunderbolt/tb_msgs.h\n> index fe3039b05da6..2a76908537a6 100644\n> --- a/drivers/thunderbolt/tb_msgs.h\n> +++ b/drivers/thunderbolt/tb_msgs.h\n> @@ -101,11 +101,14 @@ enum icm_pkg_code {\n>  \tICM_CHALLENGE_DEVICE = 0x5,\n>  \tICM_ADD_DEVICE_KEY = 0x6,\n>  \tICM_GET_ROUTE = 0xa,\n> +\tICM_APPROVE_XDOMAIN = 0x10,\n>  };\n>  \n>  enum icm_event_code {\n>  \tICM_EVENT_DEVICE_CONNECTED = 3,\n>  \tICM_EVENT_DEVICE_DISCONNECTED = 4,\n> +\tICM_EVENT_XDOMAIN_CONNECTED = 6,\n> +\tICM_EVENT_XDOMAIN_DISCONNECTED = 7,\n>  };\n>  \n>  struct icm_pkg_header {\n> @@ -188,6 +191,25 @@ struct icm_fr_event_device_disconnected {\n>  \tu16 link_info;\n>  } __packed;\n>  \n> +struct icm_fr_event_xdomain_connected {\n> +\tstruct icm_pkg_header hdr;\n> +\tu16 reserved;\n> +\tu16 link_info;\n> +\tuuid_t remote_uuid;\n> +\tuuid_t local_uuid;\n> +\tu32 local_route_hi;\n> +\tu32 local_route_lo;\n> +\tu32 remote_route_hi;\n> +\tu32 remote_route_lo;\n> +} __packed;\n> +\n> +struct icm_fr_event_xdomain_disconnected {\n> +\tstruct icm_pkg_header hdr;\n> +\tu16 reserved;\n> +\tu16 link_info;\n> +\tuuid_t remote_uuid;\n> +} __packed;\n> +\n>  struct icm_fr_pkg_add_device_key {\n>  \tstruct icm_pkg_header hdr;\n>  \tuuid_t ep_uuid;\n> @@ -224,6 +246,28 @@ struct icm_fr_pkg_challenge_device_response {\n>  \tu32 response[8];\n>  } __packed;\n>  \n> +struct icm_fr_pkg_approve_xdomain {\n> +\tstruct icm_pkg_header hdr;\n> +\tu16 
reserved;\n> +\tu16 link_info;\n> +\tuuid_t remote_uuid;\n> +\tu16 transmit_path;\n> +\tu16 transmit_ring;\n> +\tu16 receive_path;\n> +\tu16 receive_ring;\n> +} __packed;\n> +\n> +struct icm_fr_pkg_approve_xdomain_response {\n> +\tstruct icm_pkg_header hdr;\n> +\tu16 reserved;\n> +\tu16 link_info;\n> +\tuuid_t remote_uuid;\n> +\tu16 transmit_path;\n> +\tu16 transmit_ring;\n> +\tu16 receive_path;\n> +\tu16 receive_ring;\n> +} __packed;\n> +\n>  /* Alpine Ridge only messages */\n>  \n>  struct icm_ar_pkg_get_route {\n> @@ -240,4 +284,83 @@ struct icm_ar_pkg_get_route_response {\n>  \tu32 route_lo;\n>  } __packed;\n>  \n> +/* XDomain messages */\n> +\n> +struct tb_xdomain_header {\n> +\tu32 route_hi;\n> +\tu32 route_lo;\n> +\tu32 length_sn;\n> +} __packed;\n> +\n> +#define TB_XDOMAIN_LENGTH_MASK\tGENMASK(5, 0)\n> +#define TB_XDOMAIN_SN_MASK\tGENMASK(28, 27)\n> +#define TB_XDOMAIN_SN_SHIFT\t27\n> +\n> +enum tb_xdp_type {\n> +\tUUID_REQUEST_OLD = 1,\n> +\tUUID_RESPONSE = 2,\n> +\tPROPERTIES_REQUEST,\n> +\tPROPERTIES_RESPONSE,\n> +\tPROPERTIES_CHANGED_REQUEST,\n> +\tPROPERTIES_CHANGED_RESPONSE,\n> +\tERROR_RESPONSE,\n> +\tUUID_REQUEST = 12,\n> +};\n> +\n> +struct tb_xdp_header {\n> +\tstruct tb_xdomain_header xd_hdr;\n> +\tuuid_t uuid;\n> +\tu32 type;\n> +} __packed;\n> +\n> +struct tb_xdp_properties {\n> +\tstruct tb_xdp_header hdr;\n> +\tuuid_t src_uuid;\n> +\tuuid_t dst_uuid;\n> +\tu16 offset;\n> +\tu16 reserved;\n> +} __packed;\n> +\n> +struct tb_xdp_properties_response {\n> +\tstruct tb_xdp_header hdr;\n> +\tuuid_t src_uuid;\n> +\tuuid_t dst_uuid;\n> +\tu16 offset;\n> +\tu16 data_length;\n> +\tu32 generation;\n> +\tu32 data[0];\n> +} __packed;\n> +\n> +/*\n> + * Max length of data array single XDomain property response is\n> allowed\n> + * to carry.\n> + */\n> +#define TB_XDP_PROPERTIES_MAX_DATA_LENGTH\t\\\n> +\t(((256 - 4 - sizeof(struct tb_xdp_properties_response))) /\n> 4)\n> +\n> +/* Maximum size of the total property block in dwords we allow */\n> +#define 
TB_XDP_PROPERTIES_MAX_LENGTH\t\t500\n> +\n> +struct tb_xdp_properties_changed {\n> +\tstruct tb_xdp_header hdr;\n> +\tuuid_t src_uuid;\n> +} __packed;\n> +\n> +struct tb_xdp_properties_changed_response {\n> +\tstruct tb_xdp_header hdr;\n> +} __packed;\n> +\n> +enum tb_xdp_error {\n> +\tERROR_SUCCESS,\n> +\tERROR_UNKNOWN_PACKET,\n> +\tERROR_UNKNOWN_DOMAIN,\n> +\tERROR_NOT_SUPPORTED,\n> +\tERROR_NOT_READY,\n> +};\n> +\n> +struct tb_xdp_error_response {\n> +\tstruct tb_xdp_header hdr;\n> +\tu32 error;\n> +} __packed;\n> +\n>  #endif\n> diff --git a/drivers/thunderbolt/xdomain.c\n> b/drivers/thunderbolt/xdomain.c\n> new file mode 100644\n> index 000000000000..1b929be8fdd6\n> --- /dev/null\n> +++ b/drivers/thunderbolt/xdomain.c\n> @@ -0,0 +1,1576 @@\n> +/*\n> + * Thunderbolt XDomain discovery protocol support\n> + *\n> + * Copyright (C) 2017, Intel Corporation\n> + * Authors: Michael Jamet <michael.jamet@intel.com>\n> + *          Mika Westerberg <mika.westerberg@linux.intel.com>\n> + *\n> + * This program is free software; you can redistribute it and/or\n> modify\n> + * it under the terms of the GNU General Public License version 2 as\n> + * published by the Free Software Foundation.\n> + */\n> +\n> +#include <linux/device.h>\n> +#include <linux/kmod.h>\n> +#include <linux/module.h>\n> +#include <linux/utsname.h>\n> +#include <linux/uuid.h>\n> +#include <linux/workqueue.h>\n> +\n> +#include \"tb.h\"\n> +\n> +#define XDOMAIN_DEFAULT_TIMEOUT\t\t\t5000 /* ms */\n> +#define XDOMAIN_PROPERTIES_RETRIES\t\t60\n> +#define XDOMAIN_PROPERTIES_CHANGED_RETRIES\t10\n> +\n> +struct xdomain_request_work {\n> +\tstruct work_struct work;\n> +\tstruct tb_xdp_header *pkg;\n> +\tstruct tb *tb;\n> +};\n> +\n> +/* Serializes access to the properties and protocol handlers below\n> */\n> +static DEFINE_MUTEX(xdomain_lock);\n> +\n> +/* Properties exposed to the remote domains */\n> +static struct tb_property_dir *xdomain_property_dir;\n> +static u32 *xdomain_property_block;\n> +static u32 
xdomain_property_block_len;\n> +static u32 xdomain_property_block_gen;\n> +\n> +/* Additional protocol handlers */\n> +static LIST_HEAD(protocol_handlers);\n> +\n> +/* UUID for XDomain discovery protocol */\n> +static const uuid_t tb_xdp_uuid =\n> +\tUUID_INIT(0xb638d70e, 0x42ff, 0x40bb,\n> +\t\t  0x97, 0xc2, 0x90, 0xe2, 0xc0, 0xb2, 0xff, 0x07);\n> +\n> +static bool tb_xdomain_match(const struct tb_cfg_request *req,\n> +\t\t\t     const struct ctl_pkg *pkg)\n> +{\n> +\tswitch (pkg->frame.eof) {\n> +\tcase TB_CFG_PKG_ERROR:\n> +\t\treturn true;\n> +\n> +\tcase TB_CFG_PKG_XDOMAIN_RESP: {\n> +\t\tconst struct tb_xdp_header *res_hdr = pkg->buffer;\n> +\t\tconst struct tb_xdp_header *req_hdr = req->request;\n> +\t\tu8 req_seq, res_seq;\n> +\n> +\t\tif (pkg->frame.size < req->response_size / 4)\n> +\t\t\treturn false;\n> +\n> +\t\t/* Make sure route matches */\n> +\t\tif ((res_hdr->xd_hdr.route_hi & ~BIT(31)) !=\n> +\t\t     req_hdr->xd_hdr.route_hi)\n> +\t\t\treturn false;\n> +\t\tif ((res_hdr->xd_hdr.route_lo) != req_hdr-\n> >xd_hdr.route_lo)\n> +\t\t\treturn false;\n> +\n> +\t\t/* Then check that the sequence number matches */\n> +\t\tres_seq = res_hdr->xd_hdr.length_sn &\n> TB_XDOMAIN_SN_MASK;\n> +\t\tres_seq >>= TB_XDOMAIN_SN_SHIFT;\n> +\t\treq_seq = req_hdr->xd_hdr.length_sn &\n> TB_XDOMAIN_SN_MASK;\n> +\t\treq_seq >>= TB_XDOMAIN_SN_SHIFT;\n> +\t\tif (res_seq != req_seq)\n> +\t\t\treturn false;\n> +\n> +\t\t/* Check that the XDomain protocol matches */\n> +\t\tif (!uuid_equal(&res_hdr->uuid, &req_hdr->uuid))\n> +\t\t\treturn false;\n> +\n> +\t\treturn true;\n> +\t}\n> +\n> +\tdefault:\n> +\t\treturn false;\n> +\t}\n> +}\n> +\n> +static bool tb_xdomain_copy(struct tb_cfg_request *req,\n> +\t\t\t    const struct ctl_pkg *pkg)\n> +{\n> +\tmemcpy(req->response, pkg->buffer, req->response_size);\n> +\treq->result.err = 0;\n> +\treturn true;\n> +}\n> +\n> +static void response_ready(void *data)\n> +{\n> +\ttb_cfg_request_put(data);\n> +}\n> +\n> +static int 
__tb_xdomain_response(struct tb_ctl *ctl, const void\n> *response,\n> +\t\t\t\t size_t size, enum tb_cfg_pkg_type\n> type)\n> +{\n> +\tstruct tb_cfg_request *req;\n> +\n> +\treq = tb_cfg_request_alloc();\n> +\tif (!req)\n> +\t\treturn -ENOMEM;\n> +\n> +\treq->match = tb_xdomain_match;\n> +\treq->copy = tb_xdomain_copy;\n> +\treq->request = response;\n> +\treq->request_size = size;\n> +\treq->request_type = type;\n> +\n> +\treturn tb_cfg_request(ctl, req, response_ready, req);\n> +}\n> +\n> +/**\n> + * tb_xdomain_response() - Send a XDomain response message\n> + * @xd: XDomain to send the message\n> + * @response: Response to send\n> + * @size: Size of the response\n> + * @type: PDF type of the response\n> + *\n> + * This can be used to send a XDomain response message to the other\n> + * domain. No response for the message is expected.\n> + *\n> + * Return: %0 in case of success and negative errno in case of\n> failure\n> + */\n> +int tb_xdomain_response(struct tb_xdomain *xd, const void *response,\n> +\t\t\tsize_t size, enum tb_cfg_pkg_type type)\n> +{\n> +\treturn __tb_xdomain_response(xd->tb->ctl, response, size,\n> type);\n> +}\n> +EXPORT_SYMBOL_GPL(tb_xdomain_response);\n> +\n> +static int __tb_xdomain_request(struct tb_ctl *ctl, const void\n> *request,\n> +\tsize_t request_size, enum tb_cfg_pkg_type request_type, void\n> *response,\n> +\tsize_t response_size, enum tb_cfg_pkg_type response_type,\n> +\tunsigned int timeout_msec)\n> +{\n> +\tstruct tb_cfg_request *req;\n> +\tstruct tb_cfg_result res;\n> +\n> +\treq = tb_cfg_request_alloc();\n> +\tif (!req)\n> +\t\treturn -ENOMEM;\n> +\n> +\treq->match = tb_xdomain_match;\n> +\treq->copy = tb_xdomain_copy;\n> +\treq->request = request;\n> +\treq->request_size = request_size;\n> +\treq->request_type = request_type;\n> +\treq->response = response;\n> +\treq->response_size = response_size;\n> +\treq->response_type = response_type;\n> +\n> +\tres = tb_cfg_request_sync(ctl, req, timeout_msec);\n> +\n> 
+\ttb_cfg_request_put(req);\n> +\n> +\treturn res.err == 1 ? -EIO : res.err;\n> +}\n> +\n> +/**\n> + * tb_xdomain_request() - Send a XDomain request\n> + * @xd: XDomain to send the request\n> + * @request: Request to send\n> + * @request_size: Size of the request in bytes\n> + * @request_type: PDF type of the request\n> + * @response: Response is copied here\n> + * @response_size: Expected size of the response in bytes\n> + * @response_type: Expected PDF type of the response\n> + * @timeout_msec: Timeout in milliseconds to wait for the response\n> + *\n> + * This function can be used to send XDomain control channel\n> messages to\n> + * the other domain. The function waits until the response is\n> received\n> + * or when timeout triggers. Whichever comes first.\n> + *\n> + * Return: %0 in case of success and negative errno in case of\n> failure\n> + */\n> +int tb_xdomain_request(struct tb_xdomain *xd, const void *request,\n> +\tsize_t request_size, enum tb_cfg_pkg_type request_type,\n> +\tvoid *response, size_t response_size,\n> +\tenum tb_cfg_pkg_type response_type, unsigned int\n> timeout_msec)\n> +{\n> +\treturn __tb_xdomain_request(xd->tb->ctl, request,\n> request_size,\n> +\t\t\t\t    request_type, response,\n> response_size,\n> +\t\t\t\t    response_type, timeout_msec);\n> +}\n> +EXPORT_SYMBOL_GPL(tb_xdomain_request);\n> +\n> +static inline void tb_xdp_fill_header(struct tb_xdp_header *hdr, u64\n> route,\n> +\tu8 sequence, enum tb_xdp_type type, size_t size)\n> +{\n> +\tu32 length_sn;\n> +\n> +\tlength_sn = (size - sizeof(hdr->xd_hdr)) / 4;\n> +\tlength_sn |= (sequence << TB_XDOMAIN_SN_SHIFT) &\n> TB_XDOMAIN_SN_MASK;\n> +\n> +\thdr->xd_hdr.route_hi = upper_32_bits(route);\n> +\thdr->xd_hdr.route_lo = lower_32_bits(route);\n> +\thdr->xd_hdr.length_sn = length_sn;\n> +\thdr->type = type;\n> +\tmemcpy(&hdr->uuid, &tb_xdp_uuid, sizeof(tb_xdp_uuid));\n> +}\n> +\n> +static int tb_xdp_handle_error(const struct tb_xdp_header *hdr)\n> +{\n> +\tconst struct 
tb_xdp_error_response *error;\n> +\n> +\tif (hdr->type != ERROR_RESPONSE)\n> +\t\treturn 0;\n> +\n> +\terror = (const struct tb_xdp_error_response *)hdr;\n> +\n> +\tswitch (error->error) {\n> +\tcase ERROR_UNKNOWN_PACKET:\n> +\tcase ERROR_UNKNOWN_DOMAIN:\n> +\t\treturn -EIO;\n> +\tcase ERROR_NOT_SUPPORTED:\n> +\t\treturn -ENOTSUPP;\n> +\tcase ERROR_NOT_READY:\n> +\t\treturn -EAGAIN;\n> +\tdefault:\n> +\t\tbreak;\n> +\t}\n> +\n> +\treturn 0;\n> +}\n> +\n> +static int tb_xdp_error_response(struct tb_ctl *ctl, u64 route, u8\n> sequence,\n> +\t\t\t\t enum tb_xdp_error error)\n> +{\n> +\tstruct tb_xdp_error_response res;\n> +\n> +\tmemset(&res, 0, sizeof(res));\n> +\ttb_xdp_fill_header(&res.hdr, route, sequence,\n> ERROR_RESPONSE,\n> +\t\t\t   sizeof(res));\n> +\tres.error = error;\n> +\n> +\treturn __tb_xdomain_response(ctl, &res, sizeof(res),\n> +\t\t\t\t     TB_CFG_PKG_XDOMAIN_RESP);\n> +}\n> +\n> +static int tb_xdp_properties_request(struct tb_ctl *ctl, u64 route,\n> +\tconst uuid_t *src_uuid, const uuid_t *dst_uuid, int retry,\n> +\tu32 **block, u32 *generation)\n> +{\n> +\tstruct tb_xdp_properties_response *res;\n> +\tstruct tb_xdp_properties req;\n> +\tu16 data_len, len;\n> +\tsize_t total_size;\n> +\tu32 *data = NULL;\n> +\tint ret;\n> +\n> +\ttotal_size = sizeof(*res) +\n> TB_XDP_PROPERTIES_MAX_DATA_LENGTH * 4;\n> +\tres = kzalloc(total_size, GFP_KERNEL);\n> +\tif (!res)\n> +\t\treturn -ENOMEM;\n> +\n> +\tmemset(&req, 0, sizeof(req));\n> +\ttb_xdp_fill_header(&req.hdr, route, retry % 4,\n> PROPERTIES_REQUEST,\n> +\t\t\t   sizeof(req));\n> +\tmemcpy(&req.src_uuid, src_uuid, sizeof(*src_uuid));\n> +\tmemcpy(&req.dst_uuid, dst_uuid, sizeof(*dst_uuid));\n> +\n> +\tlen = 0;\n> +\tdata_len = 0;\n> +\n> +\tdo {\n> +\t\tret = __tb_xdomain_request(ctl, &req, sizeof(req),\n> +\t\t\t\t\t   TB_CFG_PKG_XDOMAIN_REQ,\n> res,\n> +\t\t\t\t\t   total_size,\n> TB_CFG_PKG_XDOMAIN_RESP,\n> +\t\t\t\t\t   XDOMAIN_DEFAULT_TIMEOUT);\n> +\t\tif (ret)\n> +\t\t\tgoto err;\n> +\n> +\t\tret 
= tb_xdp_handle_error(&res->hdr);\n> +\t\tif (ret)\n> +\t\t\tgoto err;\n> +\n> +\t\t/*\n> +\t\t * Package length includes the whole payload without\n> the\n> +\t\t * XDomain header. Validate first that the package\n> is at\n> +\t\t * least size of the response structure.\n> +\t\t */\n> +\t\tlen = res->hdr.xd_hdr.length_sn &\n> TB_XDOMAIN_LENGTH_MASK;\n> +\t\tif (len < sizeof(*res) / 4) {\n> +\t\t\tret = -EINVAL;\n> +\t\t\tgoto err;\n> +\t\t}\n> +\n> +\t\tlen += sizeof(res->hdr.xd_hdr) / 4;\n> +\t\tlen -= sizeof(*res) / 4;\n> +\n> +\t\tif (res->offset != req.offset) {\n> +\t\t\tret = -EINVAL;\n> +\t\t\tgoto err;\n> +\t\t}\n> +\n> +\t\t/*\n> +\t\t * First time allocate block that has enough space\n> for\n> +\t\t * the whole properties block.\n> +\t\t */\n> +\t\tif (!data) {\n> +\t\t\tdata_len = res->data_length;\n> +\t\t\tif (data_len > TB_XDP_PROPERTIES_MAX_LENGTH)\n> {\n> +\t\t\t\tret = -E2BIG;\n> +\t\t\t\tgoto err;\n> +\t\t\t}\n> +\n> +\t\t\tdata = kcalloc(data_len, sizeof(u32),\n> GFP_KERNEL);\n> +\t\t\tif (!data) {\n> +\t\t\t\tret = -ENOMEM;\n> +\t\t\t\tgoto err;\n> +\t\t\t}\n> +\t\t}\n> +\n> +\t\tmemcpy(data + req.offset, res->data, len * 4);\n> +\t\treq.offset += len;\n> +\t} while (!data_len || req.offset < data_len);\n> +\n> +\t*block = data;\n> +\t*generation = res->generation;\n> +\n> +\tkfree(res);\n> +\n> +\treturn data_len;\n> +\n> +err:\n> +\tkfree(data);\n> +\tkfree(res);\n> +\n> +\treturn ret;\n> +}\n> +\n> +static int tb_xdp_properties_response(struct tb *tb, struct tb_ctl\n> *ctl,\n> +\tu64 route, u8 sequence, const uuid_t *src_uuid,\n> +\tconst struct tb_xdp_properties *req)\n> +{\n> +\tstruct tb_xdp_properties_response *res;\n> +\tsize_t total_size;\n> +\tu16 len;\n> +\tint ret;\n> +\n> +\t/*\n> +\t * Currently we expect all requests to be directed to us.\n> The\n> +\t * protocol supports forwarding, though which we might add\n> +\t * support later on.\n> +\t */\n> +\tif (!uuid_equal(src_uuid, &req->dst_uuid)) {\n> +\t\ttb_xdp_error_response(ctl, 
route, sequence,\n> +\t\t\t\t      ERROR_UNKNOWN_DOMAIN);\n> +\t\treturn 0;\n> +\t}\n> +\n> +\tmutex_lock(&xdomain_lock);\n> +\n> +\tif (req->offset >= xdomain_property_block_len) {\n> +\t\tmutex_unlock(&xdomain_lock);\n> +\t\treturn -EINVAL;\n> +\t}\n> +\n> +\tlen = xdomain_property_block_len - req->offset;\n> +\tlen = min_t(u16, len, TB_XDP_PROPERTIES_MAX_DATA_LENGTH);\n> +\ttotal_size = sizeof(*res) + len * 4;\n> +\n> +\tres = kzalloc(total_size, GFP_KERNEL);\n> +\tif (!res) {\n> +\t\tmutex_unlock(&xdomain_lock);\n> +\t\treturn -ENOMEM;\n> +\t}\n> +\n> +\ttb_xdp_fill_header(&res->hdr, route, sequence,\n> PROPERTIES_RESPONSE,\n> +\t\t\t   total_size);\n> +\tres->generation = xdomain_property_block_gen;\n> +\tres->data_length = xdomain_property_block_len;\n> +\tres->offset = req->offset;\n> +\tuuid_copy(&res->src_uuid, src_uuid);\n> +\tuuid_copy(&res->dst_uuid, &req->src_uuid);\n> +\tmemcpy(res->data, &xdomain_property_block[req->offset], len\n> * 4);\n> +\n> +\tmutex_unlock(&xdomain_lock);\n> +\n> +\tret = __tb_xdomain_response(ctl, res, total_size,\n> +\t\t\t\t    TB_CFG_PKG_XDOMAIN_RESP);\n> +\n> +\tkfree(res);\n> +\treturn ret;\n> +}\n> +\n> +static int tb_xdp_properties_changed_request(struct tb_ctl *ctl, u64\n> route,\n> +\t\t\t\t\t     int retry, const uuid_t\n> *uuid)\n> +{\n> +\tstruct tb_xdp_properties_changed_response res;\n> +\tstruct tb_xdp_properties_changed req;\n> +\tint ret;\n> +\n> +\tmemset(&req, 0, sizeof(req));\n> +\ttb_xdp_fill_header(&req.hdr, route, retry % 4,\n> +\t\t\t   PROPERTIES_CHANGED_REQUEST, sizeof(req));\n> +\tuuid_copy(&req.src_uuid, uuid);\n> +\n> +\tmemset(&res, 0, sizeof(res));\n> +\tret = __tb_xdomain_request(ctl, &req, sizeof(req),\n> +\t\t\t\t   TB_CFG_PKG_XDOMAIN_REQ, &res,\n> sizeof(res),\n> +\t\t\t\t   TB_CFG_PKG_XDOMAIN_RESP,\n> +\t\t\t\t   XDOMAIN_DEFAULT_TIMEOUT);\n> +\tif (ret)\n> +\t\treturn ret;\n> +\n> +\treturn tb_xdp_handle_error(&res.hdr);\n> +}\n> +\n> +static int\n> +tb_xdp_properties_changed_response(struct 
tb_ctl *ctl, u64 route, u8\n> sequence)\n> +{\n> +\tstruct tb_xdp_properties_changed_response res;\n> +\n> +\tmemset(&res, 0, sizeof(res));\n> +\ttb_xdp_fill_header(&res.hdr, route, sequence,\n> +\t\t\t   PROPERTIES_CHANGED_RESPONSE,\n> sizeof(res));\n> +\treturn __tb_xdomain_response(ctl, &res, sizeof(res),\n> +\t\t\t\t     TB_CFG_PKG_XDOMAIN_RESP);\n> +}\n> +\n> +/**\n> + * tb_register_protocol_handler() - Register protocol handler\n> + * @handler: Handler to register\n> + *\n> + * This allows XDomain service drivers to hook into incoming XDomain\n> + * messages. After this function is called the service driver needs\n> to\n> + * be able to handle calls to callback whenever a package with the\n> + * registered protocol is received.\n> + */\n> +int tb_register_protocol_handler(struct tb_protocol_handler\n> *handler)\n> +{\n> +\tif (!handler->uuid || !handler->callback)\n> +\t\treturn -EINVAL;\n> +\tif (uuid_equal(handler->uuid, &tb_xdp_uuid))\n> +\t\treturn -EINVAL;\n> +\n> +\tmutex_lock(&xdomain_lock);\n> +\tlist_add_tail(&handler->list, &protocol_handlers);\n> +\tmutex_unlock(&xdomain_lock);\n> +\n> +\treturn 0;\n> +}\n> +EXPORT_SYMBOL_GPL(tb_register_protocol_handler);\n> +\n> +/**\n> + * tb_unregister_protocol_handler() - Unregister protocol handler\n> + * @handler: Handler to unregister\n> + *\n> + * Removes the previously registered protocol handler.\n> + */\n> +void tb_unregister_protocol_handler(struct tb_protocol_handler\n> *handler)\n> +{\n> +\tmutex_lock(&xdomain_lock);\n> +\tlist_del_init(&handler->list);\n> +\tmutex_unlock(&xdomain_lock);\n> +}\n> +EXPORT_SYMBOL_GPL(tb_unregister_protocol_handler);\n> +\n> +static void tb_xdp_handle_request(struct work_struct *work)\n> +{\n> +\tstruct xdomain_request_work *xw = container_of(work,\n> typeof(*xw), work);\n> +\tconst struct tb_xdp_header *pkg = xw->pkg;\n> +\tconst struct tb_xdomain_header *xhdr = &pkg->xd_hdr;\n> +\tstruct tb *tb = xw->tb;\n> +\tstruct tb_ctl *ctl = tb->ctl;\n> +\tconst uuid_t *uuid;\n> 
+\tint ret = 0;\n> +\tu8 sequence;\n> +\tu64 route;\n> +\n> +\troute = ((u64)xhdr->route_hi << 32 | xhdr->route_lo) &\n> ~BIT_ULL(63);\n> +\tsequence = xhdr->length_sn & TB_XDOMAIN_SN_MASK;\n> +\tsequence >>= TB_XDOMAIN_SN_SHIFT;\n> +\n> +\tmutex_lock(&tb->lock);\n> +\tif (tb->root_switch)\n> +\t\tuuid = tb->root_switch->uuid;\n> +\telse\n> +\t\tuuid = NULL;\n> +\tmutex_unlock(&tb->lock);\n> +\n> +\tif (!uuid) {\n> +\t\ttb_xdp_error_response(ctl, route, sequence,\n> ERROR_NOT_READY);\n> +\t\tgoto out;\n> +\t}\n> +\n> +\tswitch (pkg->type) {\n> +\tcase PROPERTIES_REQUEST:\n> +\t\tret = tb_xdp_properties_response(tb, ctl, route,\n> sequence, uuid,\n> +\t\t\t(const struct tb_xdp_properties *)pkg);\n> +\t\tbreak;\n> +\n> +\tcase PROPERTIES_CHANGED_REQUEST: {\n> +\t\tconst struct tb_xdp_properties_changed *xchg =\n> +\t\t\t(const struct tb_xdp_properties_changed\n> *)pkg;\n> +\t\tstruct tb_xdomain *xd;\n> +\n> +\t\tret = tb_xdp_properties_changed_response(ctl, route,\n> sequence);\n> +\n> +\t\t/*\n> +\t\t * Since the properties have been changed, let's\n> update\n> +\t\t * the xdomain related to this connection as well in\n> +\t\t * case there is a change in services it offers.\n> +\t\t */\n> +\t\txd = tb_xdomain_find_by_uuid_locked(tb, &xchg-\n> >src_uuid);\n> +\t\tif (xd) {\n> +\t\t\tqueue_delayed_work(tb->wq, &xd-\n> >get_properties_work,\n> +\t\t\t\t\t   msecs_to_jiffies(50));\n> +\t\t\ttb_xdomain_put(xd);\n> +\t\t}\n> +\n> +\t\tbreak;\n> +\t}\n> +\n> +\tdefault:\n> +\t\tbreak;\n> +\t}\n> +\n> +\tif (ret) {\n> +\t\ttb_warn(tb, \"failed to send XDomain response for\n> %#x\\n\",\n> +\t\t\tpkg->type);\n> +\t}\n> +\n> +out:\n> +\tkfree(xw->pkg);\n> +\tkfree(xw);\n> +}\n> +\n> +static void\n> +tb_xdp_schedule_request(struct tb *tb, const struct tb_xdp_header\n> *hdr,\n> +\t\t\tsize_t size)\n> +{\n> +\tstruct xdomain_request_work *xw;\n> +\n> +\txw = kmalloc(sizeof(*xw), GFP_KERNEL);\n> +\tif (!xw)\n> +\t\treturn;\n> +\n> +\tINIT_WORK(&xw->work, tb_xdp_handle_request);\n> 
+\txw->pkg = kmemdup(hdr, size, GFP_KERNEL);\n> +\txw->tb = tb;\n> +\n> +\tqueue_work(tb->wq, &xw->work);\n> +}\n> +\n> +/**\n> + * tb_register_service_driver() - Register XDomain service driver\n> + * @drv: Driver to register\n> + *\n> + * Registers new service driver from @drv to the bus.\n> + */\n> +int tb_register_service_driver(struct tb_service_driver *drv)\n> +{\n> +\tdrv->driver.bus = &tb_bus_type;\n> +\treturn driver_register(&drv->driver);\n> +}\n> +EXPORT_SYMBOL_GPL(tb_register_service_driver);\n> +\n> +/**\n> + * tb_unregister_service_driver() - Unregister XDomain service\n> driver\n> + * @xdrv: Driver to unregister\n> + *\n> + * Unregisters XDomain service driver from the bus.\n> + */\n> +void tb_unregister_service_driver(struct tb_service_driver *drv)\n> +{\n> +\tdriver_unregister(&drv->driver);\n> +}\n> +EXPORT_SYMBOL_GPL(tb_unregister_service_driver);\n> +\n> +static ssize_t key_show(struct device *dev, struct device_attribute\n> *attr,\n> +\t\t\tchar *buf)\n> +{\n> +\tstruct tb_service *svc = container_of(dev, struct\n> tb_service, dev);\n> +\n> +\t/*\n> +\t * It should be null terminated but anything else is pretty\n> much\n> +\t * allowed.\n> +\t */\n> +\treturn sprintf(buf, \"%*pEp\\n\", (int)strlen(svc->key), svc-\n> >key);\n> +}\n> +static DEVICE_ATTR_RO(key);\n> +\n> +static int get_modalias(struct tb_service *svc, char *buf, size_t\n> size)\n> +{\n> +\treturn snprintf(buf, size, \"tbsvc:k%sp%08Xv%08Xr%08X\", svc-\n> >key,\n> +\t\t\tsvc->prtcid, svc->prtcvers, svc->prtcrevs);\n> +}\n> +\n> +static ssize_t modalias_show(struct device *dev, struct\n> device_attribute *attr,\n> +\t\t\t     char *buf)\n> +{\n> +\tstruct tb_service *svc = container_of(dev, struct\n> tb_service, dev);\n> +\n> +\t/* Full buffer size except new line and null termination */\n> +\tget_modalias(svc, buf, PAGE_SIZE - 2);\n> +\treturn sprintf(buf, \"%s\\n\", buf);\n> +}\n> +static DEVICE_ATTR_RO(modalias);\n> +\n> +static ssize_t prtcid_show(struct device *dev, struct\n> 
device_attribute *attr,\n> +\t\t\t   char *buf)\n> +{\n> +\tstruct tb_service *svc = container_of(dev, struct\n> tb_service, dev);\n> +\n> +\treturn sprintf(buf, \"%u\\n\", svc->prtcid);\n> +}\n> +static DEVICE_ATTR_RO(prtcid);\n> +\n> +static ssize_t prtcvers_show(struct device *dev, struct\n> device_attribute *attr,\n> +\t\t\t     char *buf)\n> +{\n> +\tstruct tb_service *svc = container_of(dev, struct\n> tb_service, dev);\n> +\n> +\treturn sprintf(buf, \"%u\\n\", svc->prtcvers);\n> +}\n> +static DEVICE_ATTR_RO(prtcvers);\n> +\n> +static ssize_t prtcrevs_show(struct device *dev, struct\n> device_attribute *attr,\n> +\t\t\t     char *buf)\n> +{\n> +\tstruct tb_service *svc = container_of(dev, struct\n> tb_service, dev);\n> +\n> +\treturn sprintf(buf, \"%u\\n\", svc->prtcrevs);\n> +}\n> +static DEVICE_ATTR_RO(prtcrevs);\n> +\n> +static ssize_t prtcstns_show(struct device *dev, struct\n> device_attribute *attr,\n> +\t\t\t     char *buf)\n> +{\n> +\tstruct tb_service *svc = container_of(dev, struct\n> tb_service, dev);\n> +\n> +\treturn sprintf(buf, \"0x%08x\\n\", svc->prtcstns);\n> +}\n> +static DEVICE_ATTR_RO(prtcstns);\n> +\n> +static struct attribute *tb_service_attrs[] = {\n> +\t&dev_attr_key.attr,\n> +\t&dev_attr_modalias.attr,\n> +\t&dev_attr_prtcid.attr,\n> +\t&dev_attr_prtcvers.attr,\n> +\t&dev_attr_prtcrevs.attr,\n> +\t&dev_attr_prtcstns.attr,\n> +\tNULL,\n> +};\n> +\n> +static struct attribute_group tb_service_attr_group = {\n> +\t.attrs = tb_service_attrs,\n> +};\n> +\n> +static const struct attribute_group *tb_service_attr_groups[] = {\n> +\t&tb_service_attr_group,\n> +\tNULL,\n> +};\n> +\n> +static int tb_service_uevent(struct device *dev, struct\n> kobj_uevent_env *env)\n> +{\n> +\tstruct tb_service *svc = container_of(dev, struct\n> tb_service, dev);\n> +\tchar modalias[64];\n> +\n> +\tget_modalias(svc, modalias, sizeof(modalias));\n> +\treturn add_uevent_var(env, \"MODALIAS=%s\", modalias);\n> +}\n> +\n> +static void tb_service_release(struct device 
*dev)\n> +{\n> +\tstruct tb_service *svc = container_of(dev, struct\n> tb_service, dev);\n> +\tstruct tb_xdomain *xd = tb_service_parent(svc);\n> +\n> +\tida_simple_remove(&xd->service_ids, svc->id);\n> +\tkfree(svc->key);\n> +\tkfree(svc);\n> +}\n> +\n> +struct device_type tb_service_type = {\n> +\t.name = \"thunderbolt_service\",\n> +\t.groups = tb_service_attr_groups,\n> +\t.uevent = tb_service_uevent,\n> +\t.release = tb_service_release,\n> +};\n> +EXPORT_SYMBOL_GPL(tb_service_type);\n> +\n> +static int remove_missing_service(struct device *dev, void *data)\n> +{\n> +\tstruct tb_xdomain *xd = data;\n> +\tstruct tb_service *svc;\n> +\n> +\tsvc = tb_to_service(dev);\n> +\tif (!svc)\n> +\t\treturn 0;\n> +\n> +\tif (!tb_property_find(xd->properties, svc->key,\n> +\t\t\t      TB_PROPERTY_TYPE_DIRECTORY))\n> +\t\tdevice_unregister(dev);\n> +\n> +\treturn 0;\n> +}\n> +\n> +static int find_service(struct device *dev, void *data)\n> +{\n> +\tconst struct tb_property *p = data;\n> +\tstruct tb_service *svc;\n> +\n> +\tsvc = tb_to_service(dev);\n> +\tif (!svc)\n> +\t\treturn 0;\n> +\n> +\treturn !strcmp(svc->key, p->key);\n> +}\n> +\n> +static int populate_service(struct tb_service *svc,\n> +\t\t\t    struct tb_property *property)\n> +{\n> +\tstruct tb_property_dir *dir = property->value.dir;\n> +\tstruct tb_property *p;\n> +\n> +\t/* Fill in standard properties */\n> +\tp = tb_property_find(dir, \"prtcid\", TB_PROPERTY_TYPE_VALUE);\n> +\tif (p)\n> +\t\tsvc->prtcid = p->value.immediate;\n> +\tp = tb_property_find(dir, \"prtcvers\",\n> TB_PROPERTY_TYPE_VALUE);\n> +\tif (p)\n> +\t\tsvc->prtcvers = p->value.immediate;\n> +\tp = tb_property_find(dir, \"prtcrevs\",\n> TB_PROPERTY_TYPE_VALUE);\n> +\tif (p)\n> +\t\tsvc->prtcrevs = p->value.immediate;\n> +\tp = tb_property_find(dir, \"prtcstns\",\n> TB_PROPERTY_TYPE_VALUE);\n> +\tif (p)\n> +\t\tsvc->prtcstns = p->value.immediate;\n> +\n> +\tsvc->key = kstrdup(property->key, GFP_KERNEL);\n> +\tif (!svc->key)\n> +\t\treturn 
-ENOMEM;\n> +\n> +\treturn 0;\n> +}\n> +\n> +static void enumerate_services(struct tb_xdomain *xd)\n> +{\n> +\tstruct tb_service *svc;\n> +\tstruct tb_property *p;\n> +\tstruct device *dev;\n> +\n> +\t/*\n> +\t * First remove all services that are not available anymore\n> in\n> +\t * the updated property block.\n> +\t */\n> +\tdevice_for_each_child_reverse(&xd->dev, xd,\n> remove_missing_service);\n> +\n> +\t/* Then re-enumerate properties creating new services as we\n> go */\n> +\ttb_property_for_each(xd->properties, p) {\n> +\t\tif (p->type != TB_PROPERTY_TYPE_DIRECTORY)\n> +\t\t\tcontinue;\n> +\n> +\t\t/* If the service exists already we are fine */\n> +\t\tdev = device_find_child(&xd->dev, p, find_service);\n> +\t\tif (dev) {\n> +\t\t\tput_device(dev);\n> +\t\t\tcontinue;\n> +\t\t}\n> +\n> +\t\tsvc = kzalloc(sizeof(*svc), GFP_KERNEL);\n> +\t\tif (!svc)\n> +\t\t\tbreak;\n> +\n> +\t\tif (populate_service(svc, p)) {\n> +\t\t\tkfree(svc);\n> +\t\t\tbreak;\n> +\t\t}\n> +\n> +\t\tsvc->id = ida_simple_get(&xd->service_ids, 0, 0,\n> GFP_KERNEL);\n> +\t\tsvc->dev.bus = &tb_bus_type;\n> +\t\tsvc->dev.type = &tb_service_type;\n> +\t\tsvc->dev.parent = &xd->dev;\n> +\t\tdev_set_name(&svc->dev, \"%s.%d\", dev_name(&xd->dev), \n> svc->id);\n> +\n> +\t\tif (device_register(&svc->dev)) {\n> +\t\t\tput_device(&svc->dev);\n> +\t\t\tbreak;\n> +\t\t}\n> +\t}\n> +}\n> +\n> +static int populate_properties(struct tb_xdomain *xd,\n> +\t\t\t       struct tb_property_dir *dir)\n> +{\n> +\tconst struct tb_property *p;\n> +\n> +\t/* Required properties */\n> +\tp = tb_property_find(dir, \"deviceid\",\n> TB_PROPERTY_TYPE_VALUE);\n> +\tif (!p)\n> +\t\treturn -EINVAL;\n> +\txd->device = p->value.immediate;\n> +\n> +\tp = tb_property_find(dir, \"vendorid\",\n> TB_PROPERTY_TYPE_VALUE);\n> +\tif (!p)\n> +\t\treturn -EINVAL;\n> +\txd->vendor = p->value.immediate;\n> +\n> +\tkfree(xd->device_name);\n> +\txd->device_name = NULL;\n> +\tkfree(xd->vendor_name);\n> +\txd->vendor_name = NULL;\n> +\n> 
+\t/* Optional properties */\n> +\tp = tb_property_find(dir, \"deviceid\",\n> TB_PROPERTY_TYPE_TEXT);\n> +\tif (p)\n> +\t\txd->device_name = kstrdup(p->value.text,\n> GFP_KERNEL);\n> +\tp = tb_property_find(dir, \"vendorid\",\n> TB_PROPERTY_TYPE_TEXT);\n> +\tif (p)\n> +\t\txd->vendor_name = kstrdup(p->value.text,\n> GFP_KERNEL);\n> +\n> +\treturn 0;\n> +}\n> +\n> +/* Called with @xd->lock held */\n> +static void tb_xdomain_restore_paths(struct tb_xdomain *xd)\n> +{\n> +\tif (!xd->resume)\n> +\t\treturn;\n> +\n> +\txd->resume = false;\n> +\tif (xd->transmit_path) {\n> +\t\tdev_dbg(&xd->dev, \"re-establishing DMA path\\n\");\n> +\t\ttb_domain_approve_xdomain_paths(xd->tb, xd);\n> +\t}\n> +}\n> +\n> +static void tb_xdomain_get_properties(struct work_struct *work)\n> +{\n> +\tstruct tb_xdomain *xd = container_of(work, typeof(*xd),\n> +\t\t\t\t\t     get_properties_work.wor\n> k);\n> +\tstruct tb_property_dir *dir;\n> +\tstruct tb *tb = xd->tb;\n> +\tbool update = false;\n> +\tu32 *block = NULL;\n> +\tu32 gen = 0;\n> +\tint ret;\n> +\n> +\tret = tb_xdp_properties_request(tb->ctl, xd->route, xd-\n> >local_uuid,\n> +\t\t\t\t\txd->remote_uuid, xd-\n> >properties_retries,\n> +\t\t\t\t\t&block, &gen);\n> +\tif (ret < 0) {\n> +\t\tif (xd->properties_retries-- > 0) {\n> +\t\t\tqueue_delayed_work(xd->tb->wq, &xd-\n> >get_properties_work,\n> +\t\t\t\t\t   msecs_to_jiffies(1000));\n> +\t\t} else {\n> +\t\t\t/* Give up now */\n> +\t\t\tdev_err(&xd->dev,\n> +\t\t\t\t\"failed read XDomain properties from\n> %pUb\\n\",\n> +\t\t\t\txd->remote_uuid);\n> +\t\t}\n> +\t\treturn;\n> +\t}\n> +\n> +\txd->properties_retries = XDOMAIN_PROPERTIES_RETRIES;\n> +\n> +\tmutex_lock(&xd->lock);\n> +\n> +\t/* Only accept newer generation properties */\n> +\tif (xd->properties && gen <= xd->property_block_gen) {\n> +\t\t/*\n> +\t\t * On resume it is likely that the properties block\n> is\n> +\t\t * not changed (unless the other end added or\n> removed\n> +\t\t * services). 
However, we need to make sure the\n> existing\n> +\t\t * DMA paths are restored properly.\n> +\t\t */\n> +\t\ttb_xdomain_restore_paths(xd);\n> +\t\tgoto err_free_block;\n> +\t}\n> +\n> +\tdir = tb_property_parse_dir(block, ret);\n> +\tif (!dir) {\n> +\t\tdev_err(&xd->dev, \"failed to parse XDomain\n> properties\\n\");\n> +\t\tgoto err_free_block;\n> +\t}\n> +\n> +\tret = populate_properties(xd, dir);\n> +\tif (ret) {\n> +\t\tdev_err(&xd->dev, \"missing XDomain properties in\n> response\\n\");\n> +\t\tgoto err_free_dir;\n> +\t}\n> +\n> +\t/* Release the existing one */\n> +\tif (xd->properties) {\n> +\t\ttb_property_free_dir(xd->properties);\n> +\t\tupdate = true;\n> +\t}\n> +\n> +\txd->properties = dir;\n> +\txd->property_block_gen = gen;\n> +\n> +\ttb_xdomain_restore_paths(xd);\n> +\n> +\tmutex_unlock(&xd->lock);\n> +\n> +\tkfree(block);\n> +\n> +\t/*\n> +\t * Now the device should be ready enough so we can add it to\n> the\n> +\t * bus and let userspace know about it. If the device is\n> already\n> +\t * registered, we notify the userspace that it has changed.\n> +\t */\n> +\tif (!update) {\n> +\t\tif (device_add(&xd->dev)) {\n> +\t\t\tdev_err(&xd->dev, \"failed to add XDomain\n> device\\n\");\n> +\t\t\treturn;\n> +\t\t}\n> +\t} else {\n> +\t\tkobject_uevent(&xd->dev.kobj, KOBJ_CHANGE);\n> +\t}\n> +\n> +\tenumerate_services(xd);\n> +\treturn;\n> +\n> +err_free_dir:\n> +\ttb_property_free_dir(dir);\n> +err_free_block:\n> +\tkfree(block);\n> +\tmutex_unlock(&xd->lock);\n> +}\n> +\n> +static void tb_xdomain_properties_changed(struct work_struct *work)\n> +{\n> +\tstruct tb_xdomain *xd = container_of(work, typeof(*xd),\n> +\t\t\t\t\t     properties_changed_work\n> .work);\n> +\tint ret;\n> +\n> +\tret = tb_xdp_properties_changed_request(xd->tb->ctl, xd-\n> >route,\n> +\t\t\t\txd->properties_changed_retries, xd-\n> >local_uuid);\n> +\tif (ret) {\n> +\t\tif (xd->properties_changed_retries-- > 0)\n> +\t\t\tqueue_delayed_work(xd->tb->wq,\n> +\t\t\t\t\t   &xd-\n> 
>properties_changed_work,\n> +\t\t\t\t\t   msecs_to_jiffies(1000));\n> +\t\treturn;\n> +\t}\n> +\n> +\txd->properties_changed_retries =\n> XDOMAIN_PROPERTIES_CHANGED_RETRIES;\n> +}\n> +\n> +static ssize_t device_show(struct device *dev, struct\n> device_attribute *attr,\n> +\t\t\t   char *buf)\n> +{\n> +\tstruct tb_xdomain *xd = container_of(dev, struct tb_xdomain,\n> dev);\n> +\n> +\treturn sprintf(buf, \"%#x\\n\", xd->device);\n> +}\n> +static DEVICE_ATTR_RO(device);\n> +\n> +static ssize_t\n> +device_name_show(struct device *dev, struct device_attribute *attr,\n> char *buf)\n> +{\n> +\tstruct tb_xdomain *xd = container_of(dev, struct tb_xdomain,\n> dev);\n> +\tint ret;\n> +\n> +\tif (mutex_lock_interruptible(&xd->lock))\n> +\t\treturn -ERESTARTSYS;\n> +\tret = sprintf(buf, \"%s\\n\", xd->device_name ? xd->device_name \n> : \"\");\n> +\tmutex_unlock(&xd->lock);\n> +\n> +\treturn ret;\n> +}\n> +static DEVICE_ATTR_RO(device_name);\n> +\n> +static ssize_t vendor_show(struct device *dev, struct\n> device_attribute *attr,\n> +\t\t\t   char *buf)\n> +{\n> +\tstruct tb_xdomain *xd = container_of(dev, struct tb_xdomain,\n> dev);\n> +\n> +\treturn sprintf(buf, \"%#x\\n\", xd->vendor);\n> +}\n> +static DEVICE_ATTR_RO(vendor);\n> +\n> +static ssize_t\n> +vendor_name_show(struct device *dev, struct device_attribute *attr,\n> char *buf)\n> +{\n> +\tstruct tb_xdomain *xd = container_of(dev, struct tb_xdomain,\n> dev);\n> +\tint ret;\n> +\n> +\tif (mutex_lock_interruptible(&xd->lock))\n> +\t\treturn -ERESTARTSYS;\n> +\tret = sprintf(buf, \"%s\\n\", xd->vendor_name ? 
xd->vendor_name \n> : \"\");\n> +\tmutex_unlock(&xd->lock);\n> +\n> +\treturn ret;\n> +}\n> +static DEVICE_ATTR_RO(vendor_name);\n> +\n> +static ssize_t unique_id_show(struct device *dev, struct\n> device_attribute *attr,\n> +\t\t\t      char *buf)\n> +{\n> +\tstruct tb_xdomain *xd = container_of(dev, struct tb_xdomain,\n> dev);\n> +\n> +\treturn sprintf(buf, \"%pUb\\n\", xd->remote_uuid);\n> +}\n> +static DEVICE_ATTR_RO(unique_id);\n> +\n> +static struct attribute *xdomain_attrs[] = {\n> +\t&dev_attr_device.attr,\n> +\t&dev_attr_device_name.attr,\n> +\t&dev_attr_unique_id.attr,\n> +\t&dev_attr_vendor.attr,\n> +\t&dev_attr_vendor_name.attr,\n> +\tNULL,\n> +};\n> +\n> +static struct attribute_group xdomain_attr_group = {\n> +\t.attrs = xdomain_attrs,\n> +};\n> +\n> +static const struct attribute_group *xdomain_attr_groups[] = {\n> +\t&xdomain_attr_group,\n> +\tNULL,\n> +};\n> +\n> +static void tb_xdomain_release(struct device *dev)\n> +{\n> +\tstruct tb_xdomain *xd = container_of(dev, struct tb_xdomain,\n> dev);\n> +\n> +\tput_device(xd->dev.parent);\n> +\n> +\ttb_property_free_dir(xd->properties);\n> +\tida_destroy(&xd->service_ids);\n> +\n> +\tkfree(xd->local_uuid);\n> +\tkfree(xd->remote_uuid);\n> +\tkfree(xd->device_name);\n> +\tkfree(xd->vendor_name);\n> +\tkfree(xd);\n> +}\n> +\n> +static void start_handshake(struct tb_xdomain *xd)\n> +{\n> +\txd->properties_retries = XDOMAIN_PROPERTIES_RETRIES;\n> +\txd->properties_changed_retries =\n> XDOMAIN_PROPERTIES_CHANGED_RETRIES;\n> +\n> +\t/* Start exchanging properties with the other host */\n> +\tqueue_delayed_work(xd->tb->wq, &xd->properties_changed_work,\n> +\t\t\t   msecs_to_jiffies(100));\n> +\tqueue_delayed_work(xd->tb->wq, &xd->get_properties_work,\n> +\t\t\t   msecs_to_jiffies(1000));\n> +}\n> +\n> +static void stop_handshake(struct tb_xdomain *xd)\n> +{\n> +\txd->properties_retries = 0;\n> +\txd->properties_changed_retries = 0;\n> +\n> +\tcancel_delayed_work_sync(&xd->get_properties_work);\n> 
+\tcancel_delayed_work_sync(&xd->properties_changed_work);\n> +}\n> +\n> +static int __maybe_unused tb_xdomain_suspend(struct device *dev)\n> +{\n> +\tstop_handshake(tb_to_xdomain(dev));\n> +\treturn 0;\n> +}\n> +\n> +static int __maybe_unused tb_xdomain_resume(struct device *dev)\n> +{\n> +\tstruct tb_xdomain *xd = tb_to_xdomain(dev);\n> +\n> +\t/*\n> +\t * Ask tb_xdomain_get_properties() to restore any existing DMA\n> +\t * paths after properties are re-read.\n> +\t */\n> +\txd->resume = true;\n> +\tstart_handshake(xd);\n> +\n> +\treturn 0;\n> +}\n> +\n> +static const struct dev_pm_ops tb_xdomain_pm_ops = {\n> +\tSET_SYSTEM_SLEEP_PM_OPS(tb_xdomain_suspend,\n> tb_xdomain_resume)\n> +};\n> +\n> +struct device_type tb_xdomain_type = {\n> +\t.name = \"thunderbolt_xdomain\",\n> +\t.release = tb_xdomain_release,\n> +\t.pm = &tb_xdomain_pm_ops,\n> +};\n> +EXPORT_SYMBOL_GPL(tb_xdomain_type);\n> +\n> +/**\n> + * tb_xdomain_alloc() - Allocate new XDomain object\n> + * @tb: Domain where the XDomain belongs\n> + * @parent: Parent device (the switch through which the connection to\n> + *\t    the other domain is reached).\n> + * @route: Route string used to reach the other domain\n> + * @local_uuid: Our local domain UUID\n> + * @remote_uuid: UUID of the other domain\n> + *\n> + * Allocates a new XDomain structure and returns a pointer to it. 
The\n> + * object must be released by calling tb_xdomain_put().\n> + */\n> +struct tb_xdomain *tb_xdomain_alloc(struct tb *tb, struct device\n> *parent,\n> +\t\t\t\t    u64 route, const uuid_t\n> *local_uuid,\n> +\t\t\t\t    const uuid_t *remote_uuid)\n> +{\n> +\tstruct tb_xdomain *xd;\n> +\n> +\txd = kzalloc(sizeof(*xd), GFP_KERNEL);\n> +\tif (!xd)\n> +\t\treturn NULL;\n> +\n> +\txd->tb = tb;\n> +\txd->route = route;\n> +\tida_init(&xd->service_ids);\n> +\tmutex_init(&xd->lock);\n> +\tINIT_DELAYED_WORK(&xd->get_properties_work,\n> tb_xdomain_get_properties);\n> +\tINIT_DELAYED_WORK(&xd->properties_changed_work,\n> +\t\t\t  tb_xdomain_properties_changed);\n> +\n> +\txd->local_uuid = kmemdup(local_uuid, sizeof(uuid_t),\n> GFP_KERNEL);\n> +\tif (!xd->local_uuid)\n> +\t\tgoto err_free;\n> +\n> +\txd->remote_uuid = kmemdup(remote_uuid, sizeof(uuid_t),\n> GFP_KERNEL);\n> +\tif (!xd->remote_uuid)\n> +\t\tgoto err_free_local_uuid;\n> +\n> +\tdevice_initialize(&xd->dev);\n> +\txd->dev.parent = get_device(parent);\n> +\txd->dev.bus = &tb_bus_type;\n> +\txd->dev.type = &tb_xdomain_type;\n> +\txd->dev.groups = xdomain_attr_groups;\n> +\tdev_set_name(&xd->dev, \"%u-%llx\", tb->index, route);\n> +\n> +\treturn xd;\n> +\n> +err_free_local_uuid:\n> +\tkfree(xd->local_uuid);\n> +err_free:\n> +\tkfree(xd);\n> +\n> +\treturn NULL;\n> +}\n> +\n> +/**\n> + * tb_xdomain_add() - Add XDomain to the bus\n> + * @xd: XDomain to add\n> + *\n> + * This function starts XDomain discovery protocol handshake and\n> + * eventually adds the XDomain to the bus. 
After calling this\n> function\n> + * the caller needs to call tb_xdomain_remove() in order to remove\n> and\n> + * release the object regardless of whether the handshake succeeded or\n> not.\n> + */\n> +void tb_xdomain_add(struct tb_xdomain *xd)\n> +{\n> +\t/* Start exchanging properties with the other host */\n> +\tstart_handshake(xd);\n> +}\n> +\n> +static int unregister_service(struct device *dev, void *data)\n> +{\n> +\tdevice_unregister(dev);\n> +\treturn 0;\n> +}\n> +\n> +/**\n> + * tb_xdomain_remove() - Remove XDomain from the bus\n> + * @xd: XDomain to remove\n> + *\n> + * This will stop all ongoing configuration work and remove the\n> XDomain\n> + * along with any services from the bus. When the last reference to\n> @xd\n> + * is released the object will be released as well.\n> + */\n> +void tb_xdomain_remove(struct tb_xdomain *xd)\n> +{\n> +\tstop_handshake(xd);\n> +\n> +\tdevice_for_each_child_reverse(&xd->dev, xd,\n> unregister_service);\n> +\n> +\tif (!device_is_registered(&xd->dev))\n> +\t\tput_device(&xd->dev);\n> +\telse\n> +\t\tdevice_unregister(&xd->dev);\n> +}\n> +\n> +/**\n> + * tb_xdomain_enable_paths() - Enable DMA paths for XDomain\n> connection\n> + * @xd: XDomain connection\n> + * @transmit_path: HopID of the transmit path the other end is using\n> to\n> + *\t\t   send packets\n> + * @transmit_ring: DMA ring used to receive packets from the other\n> end\n> + * @receive_path: HopID of the receive path the other end is using\n> to\n> + *\t\t  receive packets\n> + * @receive_ring: DMA ring used to send packets to the other end\n> + *\n> + * The function enables DMA paths accordingly so that after\n> successful\n> + * return the caller can send and receive packets using the high-speed\n> DMA\n> + * path.\n> + *\n> + * Return: %0 in case of success and negative errno in case of error\n> + */\n> +int tb_xdomain_enable_paths(struct tb_xdomain *xd, u16\n> transmit_path,\n> +\t\t\t    u16 transmit_ring, u16 receive_path,\n> +\t\t\t    u16 receive_ring)\n> 
+{\n> +\tint ret;\n> +\n> +\tmutex_lock(&xd->lock);\n> +\n> +\tif (xd->transmit_path) {\n> +\t\tret = xd->transmit_path == transmit_path ? 0 :\n> -EBUSY;\n> +\t\tgoto exit_unlock;\n> +\t}\n> +\n> +\txd->transmit_path = transmit_path;\n> +\txd->transmit_ring = transmit_ring;\n> +\txd->receive_path = receive_path;\n> +\txd->receive_ring = receive_ring;\n> +\n> +\tret = tb_domain_approve_xdomain_paths(xd->tb, xd);\n> +\n> +exit_unlock:\n> +\tmutex_unlock(&xd->lock);\n> +\n> +\treturn ret;\n> +}\n> +EXPORT_SYMBOL_GPL(tb_xdomain_enable_paths);\n> +\n> +/**\n> + * tb_xdomain_disable_paths() - Disable DMA paths for XDomain\n> connection\n> + * @xd: XDomain connection\n> + *\n> + * This does the opposite of tb_xdomain_enable_paths(). After call\n> to\n> + * this the caller is not expected to use the rings anymore.\n> + *\n> + * Return: %0 in case of success and negative errno in case of error\n> + */\n> +int tb_xdomain_disable_paths(struct tb_xdomain *xd)\n> +{\n> +\tint ret = 0;\n> +\n> +\tmutex_lock(&xd->lock);\n> +\tif (xd->transmit_path) {\n> +\t\txd->transmit_path = 0;\n> +\t\txd->transmit_ring = 0;\n> +\t\txd->receive_path = 0;\n> +\t\txd->receive_ring = 0;\n> +\n> +\t\tret = tb_domain_disconnect_xdomain_paths(xd->tb,\n> xd);\n> +\t}\n> +\tmutex_unlock(&xd->lock);\n> +\n> +\treturn ret;\n> +}\n> +EXPORT_SYMBOL_GPL(tb_xdomain_disable_paths);\n> +\n> +struct tb_xdomain_lookup {\n> +\tconst uuid_t *uuid;\n> +\tu8 link;\n> +\tu8 depth;\n> +};\n> +\n> +static struct tb_xdomain *switch_find_xdomain(struct tb_switch *sw,\n> +\tconst struct tb_xdomain_lookup *lookup)\n> +{\n> +\tint i;\n> +\n> +\tfor (i = 1; i <= sw->config.max_port_number; i++) {\n> +\t\tstruct tb_port *port = &sw->ports[i];\n> +\t\tstruct tb_xdomain *xd;\n> +\n> +\t\tif (tb_is_upstream_port(port))\n> +\t\t\tcontinue;\n> +\n> +\t\tif (port->xdomain) {\n> +\t\t\txd = port->xdomain;\n> +\n> +\t\t\tif (lookup->uuid) {\n> +\t\t\t\tif (uuid_equal(xd->remote_uuid,\n> lookup->uuid))\n> +\t\t\t\t\treturn xd;\n> 
+\t\t\t} else if (lookup->link == xd->link &&\n> +\t\t\t\t   lookup->depth == xd->depth) {\n> +\t\t\t\treturn xd;\n> +\t\t\t}\n> +\t\t} else if (port->remote) {\n> +\t\t\txd = switch_find_xdomain(port->remote->sw,\n> lookup);\n> +\t\t\tif (xd)\n> +\t\t\t\treturn xd;\n> +\t\t}\n> +\t}\n> +\n> +\treturn NULL;\n> +}\n> +\n> +/**\n> + * tb_xdomain_find_by_uuid() - Find an XDomain by UUID\n> + * @tb: Domain where the XDomain belongs to\n> + * @uuid: UUID to look for\n> + *\n> + * Finds XDomain by walking through the Thunderbolt topology below\n> @tb.\n> + * The returned XDomain will have its reference count increased so\n> the\n> + * caller needs to call tb_xdomain_put() when it is done with the\n> + * object.\n> + *\n> + * This will find all XDomains including the ones that are not yet\n> added\n> + * to the bus (handshake is still in progress).\n> + *\n> + * The caller needs to hold @tb->lock.\n> + */\n> +struct tb_xdomain *tb_xdomain_find_by_uuid(struct tb *tb, const\n> uuid_t *uuid)\n> +{\n> +\tstruct tb_xdomain_lookup lookup;\n> +\tstruct tb_xdomain *xd;\n> +\n> +\tmemset(&lookup, 0, sizeof(lookup));\n> +\tlookup.uuid = uuid;\n> +\n> +\txd = switch_find_xdomain(tb->root_switch, &lookup);\n> +\tif (xd) {\n> +\t\tget_device(&xd->dev);\n> +\t\treturn xd;\n> +\t}\n> +\n> +\treturn NULL;\n> +}\n> +EXPORT_SYMBOL_GPL(tb_xdomain_find_by_uuid);\n> +\n> +/**\n> + * tb_xdomain_find_by_link_depth() - Find an XDomain by link and\n> depth\n> + * @tb: Domain where the XDomain belongs to\n> + * @link: Root switch link number\n> + * @depth: Depth in the link\n> + *\n> + * Finds XDomain by walking through the Thunderbolt topology below\n> @tb.\n> + * The returned XDomain will have its reference count increased so\n> the\n> + * caller needs to call tb_xdomain_put() when it is done with the\n> + * object.\n> + *\n> + * This will find all XDomains including the ones that are not yet\n> added\n> + * to the bus (handshake is still in progress).\n> + *\n> + * The caller needs to hold 
@tb->lock.\n> + */\n> +struct tb_xdomain *tb_xdomain_find_by_link_depth(struct tb *tb, u8\n> link,\n> +\t\t\t\t\t\t u8 depth)\n> +{\n> +\tstruct tb_xdomain_lookup lookup;\n> +\tstruct tb_xdomain *xd;\n> +\n> +\tmemset(&lookup, 0, sizeof(lookup));\n> +\tlookup.link = link;\n> +\tlookup.depth = depth;\n> +\n> +\txd = switch_find_xdomain(tb->root_switch, &lookup);\n> +\tif (xd) {\n> +\t\tget_device(&xd->dev);\n> +\t\treturn xd;\n> +\t}\n> +\n> +\treturn NULL;\n> +}\n> +\n> +bool tb_xdomain_handle_request(struct tb *tb, enum tb_cfg_pkg_type\n> type,\n> +\t\t\t       const void *buf, size_t size)\n> +{\n> +\tconst struct tb_protocol_handler *handler, *tmp;\n> +\tconst struct tb_xdp_header *hdr = buf;\n> +\tunsigned int length;\n> +\tint ret = 0;\n> +\n> +\t/* We expect the packet is at least size of the header */\n> +\tlength = hdr->xd_hdr.length_sn & TB_XDOMAIN_LENGTH_MASK;\n> +\tif (length != size / 4 - sizeof(hdr->xd_hdr) / 4)\n> +\t\treturn true;\n> +\tif (length < sizeof(*hdr) / 4 - sizeof(hdr->xd_hdr) / 4)\n> +\t\treturn true;\n> +\n> +\t/*\n> +\t * Handle XDomain discovery protocol packets directly here.\n> For\n> +\t * other protocols (based on their UUID) we call registered\n> +\t * handlers in turn.\n> +\t */\n> +\tif (uuid_equal(&hdr->uuid, &tb_xdp_uuid)) {\n> +\t\tif (type == TB_CFG_PKG_XDOMAIN_REQ) {\n> +\t\t\ttb_xdp_schedule_request(tb, hdr, size);\n> +\t\t\treturn true;\n> +\t\t}\n> +\t\treturn false;\n> +\t}\n> +\n> +\tmutex_lock(&xdomain_lock);\n> +\tlist_for_each_entry_safe(handler, tmp, &protocol_handlers,\n> list) {\n> +\t\tif (!uuid_equal(&hdr->uuid, handler->uuid))\n> +\t\t\tcontinue;\n> +\n> +\t\tmutex_unlock(&xdomain_lock);\n> +\t\tret = handler->callback(buf, size, handler->data);\n> +\t\tmutex_lock(&xdomain_lock);\n> +\n> +\t\tif (ret)\n> +\t\t\tbreak;\n> +\t}\n> +\tmutex_unlock(&xdomain_lock);\n> +\n> +\treturn ret > 0;\n> +}\n> +\n> +static int rebuild_property_block(void)\n> +{\n> +\tu32 *block, len;\n> +\tint ret;\n> +\n> +\tret = 
tb_property_format_dir(xdomain_property_dir, NULL, 0);\n> +\tif (ret < 0)\n> +\t\treturn ret;\n> +\n> +\tlen = ret;\n> +\n> +\tblock = kcalloc(len, sizeof(u32), GFP_KERNEL);\n> +\tif (!block)\n> +\t\treturn -ENOMEM;\n> +\n> +\tret = tb_property_format_dir(xdomain_property_dir, block,\n> len);\n> +\tif (ret) {\n> +\t\tkfree(block);\n> +\t\treturn ret;\n> +\t}\n> +\n> +\tkfree(xdomain_property_block);\n> +\txdomain_property_block = block;\n> +\txdomain_property_block_len = len;\n> +\txdomain_property_block_gen++;\n> +\n> +\treturn 0;\n> +}\n> +\n> +static int update_xdomain(struct device *dev, void *data)\n> +{\n> +\tstruct tb_xdomain *xd;\n> +\n> +\txd = tb_to_xdomain(dev);\n> +\tif (xd) {\n> +\t\tqueue_delayed_work(xd->tb->wq, &xd-\n> >properties_changed_work,\n> +\t\t\t\t   msecs_to_jiffies(50));\n> +\t}\n> +\n> +\treturn 0;\n> +}\n> +\n> +static void update_all_xdomains(void)\n> +{\n> +\tbus_for_each_dev(&tb_bus_type, NULL, NULL, update_xdomain);\n> +}\n> +\n> +static bool remove_directory(const char *key, const struct\n> tb_property_dir *dir)\n> +{\n> +\tstruct tb_property *p;\n> +\n> +\tp = tb_property_find(xdomain_property_dir, key,\n> +\t\t\t     TB_PROPERTY_TYPE_DIRECTORY);\n> +\tif (p && p->value.dir == dir) {\n> +\t\ttb_property_remove(p);\n> +\t\treturn true;\n> +\t}\n> +\treturn false;\n> +}\n> +\n> +/**\n> + * tb_register_property_dir() - Register property directory to the\n> host\n> + * @key: Key (name) of the directory to add\n> + * @dir: Directory to add\n> + *\n> + * Service drivers can use this function to add a new property\n> directory\n> + * to the host's available properties. 
The other connected hosts are\n> + * notified so they can re-read properties of this host if they are\n> + * interested.\n> + *\n> + * Return: %0 on success and negative errno on failure\n> + */\n> +int tb_register_property_dir(const char *key, struct tb_property_dir\n> *dir)\n> +{\n> +\tint ret;\n> +\n> +\tif (!key || strlen(key) > 8)\n> +\t\treturn -EINVAL;\n> +\n> +\tmutex_lock(&xdomain_lock);\n> +\tif (tb_property_find(xdomain_property_dir, key,\n> +\t\t\t     TB_PROPERTY_TYPE_DIRECTORY)) {\n> +\t\tret = -EEXIST;\n> +\t\tgoto err_unlock;\n> +\t}\n> +\n> +\tret = tb_property_add_dir(xdomain_property_dir, key, dir);\n> +\tif (ret)\n> +\t\tgoto err_unlock;\n> +\n> +\tret = rebuild_property_block();\n> +\tif (ret) {\n> +\t\tremove_directory(key, dir);\n> +\t\tgoto err_unlock;\n> +\t}\n> +\n> +\tmutex_unlock(&xdomain_lock);\n> +\tupdate_all_xdomains();\n> +\treturn 0;\n> +\n> +err_unlock:\n> +\tmutex_unlock(&xdomain_lock);\n> +\treturn ret;\n> +}\n> +EXPORT_SYMBOL_GPL(tb_register_property_dir);\n> +\n> +/**\n> + * tb_unregister_property_dir() - Removes property directory from\n> host\n> + * @key: Key (name) of the directory\n> + * @dir: Directory to remove\n> + *\n> + * This will remove the existing directory from this host and notify\n> the\n> + * connected hosts about the change.\n> + */\n> +void tb_unregister_property_dir(const char *key, struct\n> tb_property_dir *dir)\n> +{\n> +\tint ret = 0;\n> +\n> +\tmutex_lock(&xdomain_lock);\n> +\tif (remove_directory(key, dir))\n> +\t\tret = rebuild_property_block();\n> +\tmutex_unlock(&xdomain_lock);\n> +\n> +\tif (!ret)\n> +\t\tupdate_all_xdomains();\n> +}\n> +EXPORT_SYMBOL_GPL(tb_unregister_property_dir);\n> +\n> +int tb_xdomain_init(void)\n> +{\n> +\tint ret;\n> +\n> +\txdomain_property_dir = tb_property_create_dir(NULL);\n> +\tif (!xdomain_property_dir)\n> +\t\treturn -ENOMEM;\n> +\n> +\t/*\n> +\t * Initialize standard set of properties without any service\n> +\t * directories. 
Those will be added by service drivers\n> +\t * themselves when they are loaded.\n> +\t */\n> +\ttb_property_add_immediate(xdomain_property_dir, \"vendorid\",\n> +\t\t\t\t  PCI_VENDOR_ID_INTEL);\n> +\ttb_property_add_text(xdomain_property_dir, \"vendorid\",\n> \"Intel Corp.\");\n> +\ttb_property_add_immediate(xdomain_property_dir, \"deviceid\",\n> 0x1);\n> +\ttb_property_add_text(xdomain_property_dir, \"deviceid\",\n> +\t\t\t     utsname()->nodename);\n> +\ttb_property_add_immediate(xdomain_property_dir, \"devicerv\",\n> 0x80000100);\n> +\n> +\tret = rebuild_property_block();\n> +\tif (ret) {\n> +\t\ttb_property_free_dir(xdomain_property_dir);\n> +\t\txdomain_property_dir = NULL;\n> +\t}\n> +\n> +\treturn ret;\n> +}\n> +\n> +void tb_xdomain_exit(void)\n> +{\n> +\tkfree(xdomain_property_block);\n> +\ttb_property_free_dir(xdomain_property_dir);\n> +}\n> diff --git a/include/linux/mod_devicetable.h\n> b/include/linux/mod_devicetable.h\n> index 694cebb50f72..7625c3b81f84 100644\n> --- a/include/linux/mod_devicetable.h\n> +++ b/include/linux/mod_devicetable.h\n> @@ -683,5 +683,31 @@ struct fsl_mc_device_id {\n>  \tconst char obj_type[16];\n>  };\n>  \n> +/**\n> + * struct tb_service_id - Thunderbolt service identifiers\n> + * @match_flags: Flags used to match the structure\n> + * @protocol_key: Protocol key the service supports\n> + * @protocol_id: Protocol id the service supports\n> + * @protocol_version: Version of the protocol\n> + * @protocol_revision: Revision of the protocol software\n> + * @driver_data: Driver specific data\n> + *\n> + * Thunderbolt XDomain services are exposed as devices where each\n> device\n> + * carries the protocol information the service supports.\n> Thunderbolt\n> + * XDomain service drivers match against that information.\n> + */\n> +struct tb_service_id {\n> +\t__u32 match_flags;\n> +\tchar protocol_key[8 + 1];\n> +\t__u32 protocol_id;\n> +\t__u32 protocol_version;\n> +\t__u32 protocol_revision;\n> +\tkernel_ulong_t driver_data;\n> 
+};\n> +\n> +#define TBSVC_MATCH_PROTOCOL_KEY\t0x0001\n> +#define TBSVC_MATCH_PROTOCOL_ID\t\t0x0002\n> +#define TBSVC_MATCH_PROTOCOL_VERSION\t0x0004\n> +#define TBSVC_MATCH_PROTOCOL_REVISION\t0x0008\n>  \n>  #endif /* LINUX_MOD_DEVICETABLE_H */\n> diff --git a/include/linux/thunderbolt.h\n> b/include/linux/thunderbolt.h\n> index 4011d6537a8c..79abdaf1c296 100644\n> --- a/include/linux/thunderbolt.h\n> +++ b/include/linux/thunderbolt.h\n> @@ -17,6 +17,7 @@\n>  #include <linux/device.h>\n>  #include <linux/list.h>\n>  #include <linux/mutex.h>\n> +#include <linux/mod_devicetable.h>\n>  #include <linux/uuid.h>\n>  \n>  enum tb_cfg_pkg_type {\n> @@ -77,6 +78,8 @@ struct tb {\n>  };\n>  \n>  extern struct bus_type tb_bus_type;\n> +extern struct device_type tb_service_type;\n> +extern struct device_type tb_xdomain_type;\n>  \n>  #define TB_LINKS_PER_PHY_PORT\t2\n>  \n> @@ -155,4 +158,243 @@ struct tb_property *tb_property_get_next(struct\n> tb_property_dir *dir,\n>  \t     property;\t\t\t\t\t\t\n> \\\n>  \t     property = tb_property_get_next(dir, property))\n>  \n> +int tb_register_property_dir(const char *key, struct tb_property_dir\n> *dir);\n> +void tb_unregister_property_dir(const char *key, struct\n> tb_property_dir *dir);\n> +\n> +/**\n> + * struct tb_xdomain - Cross-domain (XDomain) connection\n> + * @dev: XDomain device\n> + * @tb: Pointer to the domain\n> + * @remote_uuid: UUID of the remote domain (host)\n> + * @local_uuid: Cached local UUID\n> + * @route: Route string with which the other domain can be reached\n> + * @vendor: Vendor ID of the remote domain\n> + * @device: Device ID of the remote domain\n> + * @lock: Lock to serialize access to the following fields of this\n> structure\n> + * @vendor_name: Name of the vendor (or %NULL if not known)\n> + * @device_name: Name of the device (or %NULL if not known)\n> + * @is_unplugged: The XDomain is unplugged\n> + * @resume: The XDomain is being resumed\n> + * @transmit_path: HopID which the remote end expects us to 
transmit\n> + * @transmit_ring: Local ring (hop) where outgoing packets are\n> pushed\n> + * @receive_path: HopID which we expect the remote end to transmit\n> + * @receive_ring: Local ring (hop) where incoming packets arrive\n> + * @service_ids: Used to generate IDs for the services\n> + * @properties: Properties exported by the remote domain\n> + * @property_block_gen: Generation of @properties\n> + * @properties_lock: Lock protecting @properties.\n> + * @get_properties_work: Work used to get remote domain properties\n> + * @properties_retries: Number of times left to read properties\n> + * @properties_changed_work: Work used to notify the remote domain\n> that\n> + *\t\t\t     our properties have changed\n> + * @properties_changed_retries: Number of times left to send\n> properties\n> + *\t\t\t\tchanged notification\n> + * @link: Root switch link the remote domain is connected to (ICM only)\n> + * @depth: Depth in the chain the remote domain is connected at (ICM\n> only)\n> + *\n> + * This structure represents a connection between two domains (hosts).\n> + * Each XDomain contains zero or more services which are exposed as\n> + * &struct tb_service objects.\n> + *\n> + * Service drivers may access this structure if they need to\n> enumerate\n> + * non-standard properties but they need to hold @lock when doing so\n> + * because properties can be changed asynchronously in response to\n> + * changes in the remote domain.\n> + */\n> +struct tb_xdomain {\n> +\tstruct device dev;\n> +\tstruct tb *tb;\n> +\tuuid_t *remote_uuid;\n> +\tconst uuid_t *local_uuid;\n> +\tu64 route;\n> +\tu16 vendor;\n> +\tu16 device;\n> +\tstruct mutex lock;\n> +\tconst char *vendor_name;\n> +\tconst char *device_name;\n> +\tbool is_unplugged;\n> +\tbool resume;\n> +\tu16 transmit_path;\n> +\tu16 transmit_ring;\n> +\tu16 receive_path;\n> +\tu16 receive_ring;\n> +\tstruct ida service_ids;\n> +\tstruct tb_property_dir *properties;\n> +\tu32 property_block_gen;\n> +\tstruct delayed_work 
get_properties_work;\n> +\tint properties_retries;\n> +\tstruct delayed_work properties_changed_work;\n> +\tint properties_changed_retries;\n> +\tu8 link;\n> +\tu8 depth;\n> +};\n> +\n> +int tb_xdomain_enable_paths(struct tb_xdomain *xd, u16\n> transmit_path,\n> +\t\t\t    u16 transmit_ring, u16 receive_path,\n> +\t\t\t    u16 receive_ring);\n> +int tb_xdomain_disable_paths(struct tb_xdomain *xd);\n> +struct tb_xdomain *tb_xdomain_find_by_uuid(struct tb *tb, const\n> uuid_t *uuid);\n> +\n> +static inline struct tb_xdomain *\n> +tb_xdomain_find_by_uuid_locked(struct tb *tb, const uuid_t *uuid)\n> +{\n> +\tstruct tb_xdomain *xd;\n> +\n> +\tmutex_lock(&tb->lock);\n> +\txd = tb_xdomain_find_by_uuid(tb, uuid);\n> +\tmutex_unlock(&tb->lock);\n> +\n> +\treturn xd;\n> +}\n> +\n> +static inline struct tb_xdomain *tb_xdomain_get(struct tb_xdomain\n> *xd)\n> +{\n> +\tif (xd)\n> +\t\tget_device(&xd->dev);\n> +\treturn xd;\n> +}\n> +\n> +static inline void tb_xdomain_put(struct tb_xdomain *xd)\n> +{\n> +\tif (xd)\n> +\t\tput_device(&xd->dev);\n> +}\n> +\n> +static inline bool tb_is_xdomain(const struct device *dev)\n> +{\n> +\treturn dev->type == &tb_xdomain_type;\n> +}\n> +\n> +static inline struct tb_xdomain *tb_to_xdomain(struct device *dev)\n> +{\n> +\tif (tb_is_xdomain(dev))\n> +\t\treturn container_of(dev, struct tb_xdomain, dev);\n> +\treturn NULL;\n> +}\n> +\n> +int tb_xdomain_response(struct tb_xdomain *xd, const void *response,\n> +\t\t\tsize_t size, enum tb_cfg_pkg_type type);\n> +int tb_xdomain_request(struct tb_xdomain *xd, const void *request,\n> +\t\t       size_t request_size, enum tb_cfg_pkg_type\n> request_type,\n> +\t\t       void *response, size_t response_size,\n> +\t\t       enum tb_cfg_pkg_type response_type,\n> +\t\t       unsigned int timeout_msec);\n> +\n> +/**\n> + * tb_protocol_handler - Protocol specific handler\n> + * @uuid: XDomain messages with this UUID are dispatched to this\n> handler\n> + * @callback: Callback called with the XDomain message. 
Returning %1\n> + *\t      here tells the XDomain core that the message was\n> handled\n> + *\t      by this handler and should not be forwarded to other\n> + *\t      handlers.\n> + * @data: Data passed with the callback\n> + * @list: Handlers are linked using this\n> + *\n> + * Thunderbolt services can hook into incoming XDomain requests by\n> + * registering a protocol handler. The only limitation is that the XDomain\n> + * discovery protocol UUID cannot be registered since it is handled\n> by\n> + * the core XDomain code.\n> + *\n> + * The @callback must check that the message is really directed to\n> the\n> + * service the driver implements.\n> + */\n> +struct tb_protocol_handler {\n> +\tconst uuid_t *uuid;\n> +\tint (*callback)(const void *buf, size_t size, void *data);\n> +\tvoid *data;\n> +\tstruct list_head list;\n> +};\n> +\n> +int tb_register_protocol_handler(struct tb_protocol_handler\n> *handler);\n> +void tb_unregister_protocol_handler(struct tb_protocol_handler\n> *handler);\n> +\n> +/**\n> + * struct tb_service - Thunderbolt service\n> + * @dev: XDomain device\n> + * @id: ID of the service (shown in sysfs)\n> + * @key: Protocol key from the properties directory\n> + * @prtcid: Protocol ID from the properties directory\n> + * @prtcvers: Protocol version from the properties directory\n> + * @prtcrevs: Protocol software revision from the properties\n> directory\n> + * @prtcstns: Protocol settings mask from the properties directory\n> + *\n> + * Each domain exposes the set of services it supports as a collection\n> + * of properties. For each service there will be one corresponding\n> + * &struct tb_service. 
Service drivers are bound to these.\n> + */\n> +struct tb_service {\n> +\tstruct device dev;\n> +\tint id;\n> +\tconst char *key;\n> +\tu32 prtcid;\n> +\tu32 prtcvers;\n> +\tu32 prtcrevs;\n> +\tu32 prtcstns;\n> +};\n> +\n> +static inline struct tb_service *tb_service_get(struct tb_service\n> *svc)\n> +{\n> +\tif (svc)\n> +\t\tget_device(&svc->dev);\n> +\treturn svc;\n> +}\n> +\n> +static inline void tb_service_put(struct tb_service *svc)\n> +{\n> +\tif (svc)\n> +\t\tput_device(&svc->dev);\n> +}\n> +\n> +static inline bool tb_is_service(const struct device *dev)\n> +{\n> +\treturn dev->type == &tb_service_type;\n> +}\n> +\n> +static inline struct tb_service *tb_to_service(struct device *dev)\n> +{\n> +\tif (tb_is_service(dev))\n> +\t\treturn container_of(dev, struct tb_service, dev);\n> +\treturn NULL;\n> +}\n> +\n> +/**\n> + * tb_service_driver - Thunderbolt service driver\n> + * @driver: Driver structure\n> + * @probe: Called when the driver is probed\n> + * @remove: Called when the driver is removed (optional)\n> + * @shutdown: Called at shutdown time to stop the service (optional)\n> + * @id_table: Table of service identifiers the driver supports\n> + */\n> +struct tb_service_driver {\n> +\tstruct device_driver driver;\n> +\tint (*probe)(struct tb_service *svc, const struct\n> tb_service_id *id);\n> +\tvoid (*remove)(struct tb_service *svc);\n> +\tvoid (*shutdown)(struct tb_service *svc);\n> +\tconst struct tb_service_id *id_table;\n> +};\n> +\n> +#define TB_SERVICE(key, id)\t\t\t\t\\\n> +\t.match_flags = TBSVC_MATCH_PROTOCOL_KEY |\t\\\n> +\t\t       TBSVC_MATCH_PROTOCOL_ID,\t\t\\\n> +\t.protocol_key = (key),\t\t\t\t\\\n> +\t.protocol_id = (id)\n> +\n> +int tb_register_service_driver(struct tb_service_driver *drv);\n> +void tb_unregister_service_driver(struct tb_service_driver *drv);\n> +\n> +static inline void *tb_service_get_drvdata(const struct tb_service\n> *svc)\n> +{\n> +\treturn dev_get_drvdata(&svc->dev);\n> +}\n> +\n> +static inline void 
tb_service_set_drvdata(struct tb_service *svc,\n> void *data)\n> +{\n> +\tdev_set_drvdata(&svc->dev, data);\n> +}\n> +\n> +static inline struct tb_xdomain *tb_service_parent(struct tb_service\n> *svc)\n> +{\n> +\treturn tb_to_xdomain(svc->dev.parent);\n> +}\n> +\n>  #endif /* THUNDERBOLT_H_ */\n> diff --git a/scripts/mod/devicetable-offsets.c\n> b/scripts/mod/devicetable-offsets.c\n> index e4d90e50f6fe..57263f2f8f2f 100644\n> --- a/scripts/mod/devicetable-offsets.c\n> +++ b/scripts/mod/devicetable-offsets.c\n> @@ -206,5 +206,12 @@ int main(void)\n>  \tDEVID_FIELD(fsl_mc_device_id, vendor);\n>  \tDEVID_FIELD(fsl_mc_device_id, obj_type);\n>  \n> +\tDEVID(tb_service_id);\n> +\tDEVID_FIELD(tb_service_id, match_flags);\n> +\tDEVID_FIELD(tb_service_id, protocol_key);\n> +\tDEVID_FIELD(tb_service_id, protocol_id);\n> +\tDEVID_FIELD(tb_service_id, protocol_version);\n> +\tDEVID_FIELD(tb_service_id, protocol_revision);\n> +\n>  \treturn 0;\n>  }\n> diff --git a/scripts/mod/file2alias.c b/scripts/mod/file2alias.c\n> index 29d6699d5a06..6ef6e63f96fd 100644\n> --- a/scripts/mod/file2alias.c\n> +++ b/scripts/mod/file2alias.c\n> @@ -1301,6 +1301,31 @@ static int do_fsl_mc_entry(const char\n> *filename, void *symval,\n>  }\n>  ADD_TO_DEVTABLE(\"fslmc\", fsl_mc_device_id, do_fsl_mc_entry);\n>  \n> +/* Looks like: tbsvc:kSpNvNrN */\n> +static int do_tbsvc_entry(const char *filename, void *symval, char\n> *alias)\n> +{\n> +\tDEF_FIELD(symval, tb_service_id, match_flags);\n> +\tDEF_FIELD_ADDR(symval, tb_service_id, protocol_key);\n> +\tDEF_FIELD(symval, tb_service_id, protocol_id);\n> +\tDEF_FIELD(symval, tb_service_id, protocol_version);\n> +\tDEF_FIELD(symval, tb_service_id, protocol_revision);\n> +\n> +\tstrcpy(alias, \"tbsvc:\");\n> +\tif (match_flags & TBSVC_MATCH_PROTOCOL_KEY)\n> +\t\tsprintf(alias + strlen(alias), \"k%s\",\n> *protocol_key);\n> +\telse\n> +\t\tstrcat(alias + strlen(alias), \"k*\");\n> +\tADD(alias, \"p\", match_flags & TBSVC_MATCH_PROTOCOL_ID,\n> 
protocol_id);\n> +\tADD(alias, \"v\", match_flags & TBSVC_MATCH_PROTOCOL_VERSION,\n> +\t    protocol_version);\n> +\tADD(alias, \"r\", match_flags & TBSVC_MATCH_PROTOCOL_REVISION,\n> +\t    protocol_revision);\n> +\n> +\tadd_wildcard(alias);\n> +\treturn 1;\n> +}\n> +ADD_TO_DEVTABLE(\"tbsvc\", tb_service_id, do_tbsvc_entry);\n> +\n>  /* Does namelen bytes of name exactly match the symbol? */\n>  static bool sym_is(const char *name, unsigned namelen, const char\n> *symbol)\n>  {","headers":{"Return-Path":"<netdev-owner@vger.kernel.org>","X-Original-To":"patchwork-incoming@ozlabs.org","Delivered-To":"patchwork-incoming@ozlabs.org","Authentication-Results":["ozlabs.org;\n\tspf=none (mailfrom) smtp.mailfrom=vger.kernel.org\n\t(client-ip=209.132.180.67; helo=vger.kernel.org;\n\tenvelope-from=netdev-owner@vger.kernel.org;\n\treceiver=<UNKNOWN>)","ext-mx02.extmail.prod.ext.phx2.redhat.com;\n\tdmarc=none (p=none dis=none) header.from=redhat.com","ext-mx02.extmail.prod.ext.phx2.redhat.com;\n\tspf=fail smtp.mailfrom=dcbw@redhat.com"],"Received":["from vger.kernel.org (vger.kernel.org [209.132.180.67])\n\tby ozlabs.org (Postfix) with ESMTP id 3xwrd52r2Hz9s7F\n\tfor <patchwork-incoming@ozlabs.org>;\n\tTue, 19 Sep 2017 02:12:41 +1000 (AEST)","(majordomo@vger.kernel.org) by vger.kernel.org via listexpand\n\tid S1756003AbdIRQMZ (ORCPT <rfc822;patchwork-incoming@ozlabs.org>);\n\tMon, 18 Sep 2017 12:12:25 -0400","from mx1.redhat.com ([209.132.183.28]:42648 \"EHLO mx1.redhat.com\"\n\trhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP\n\tid S1752409AbdIRQMW (ORCPT <rfc822;netdev@vger.kernel.org>);\n\tMon, 18 Sep 2017 12:12:22 -0400","from smtp.corp.redhat.com\n\t(int-mx02.intmail.prod.int.phx2.redhat.com [10.5.11.12])\n\t(using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))\n\t(No client certificate requested)\n\tby mx1.redhat.com (Postfix) with ESMTPS id EA7B8806CF;\n\tMon, 18 Sep 2017 16:12:21 +0000 (UTC)","from ovpn-112-34.rdu2.redhat.com 
(ovpn-112-34.rdu2.redhat.com\n\t[10.10.112.34])\n\tby smtp.corp.redhat.com (Postfix) with ESMTP id 6965718B8D;\n\tMon, 18 Sep 2017 16:12:18 +0000 (UTC)"],"DMARC-Filter":"OpenDMARC Filter v1.3.2 mx1.redhat.com EA7B8806CF","Message-ID":"<1505751137.11871.2.camel@redhat.com>","Subject":"Re: [PATCH 06/16] thunderbolt: Add support for XDomain discovery\n\tprotocol","From":"Dan Williams <dcbw@redhat.com>","To":"Mika Westerberg <mika.westerberg@linux.intel.com>,\n\tGreg Kroah-Hartman <gregkh@linuxfoundation.org>,\n\t\"David S . Miller\" <davem@davemloft.net>","Cc":"Andreas Noever <andreas.noever@gmail.com>,\n\tMichael Jamet <michael.jamet@intel.com>,\n\tYehezkel Bernat <yehezkel.bernat@intel.com>,\n\tAmir Levy <amir.jer.levy@intel.com>,\n\tMario.Limonciello@dell.com, Lukas Wunner <lukas@wunner.de>,\n\tAndy Shevchenko <andriy.shevchenko@linux.intel.com>,\n\tlinux-kernel@vger.kernel.org, netdev@vger.kernel.org","Date":"Mon, 18 Sep 2017 11:12:17 -0500","In-Reply-To":"<20170918153049.44185-7-mika.westerberg@linux.intel.com>","References":"<20170918153049.44185-1-mika.westerberg@linux.intel.com>\n\t<20170918153049.44185-7-mika.westerberg@linux.intel.com>","Content-Type":"text/plain; charset=\"UTF-8\"","Mime-Version":"1.0","Content-Transfer-Encoding":"8bit","X-Scanned-By":"MIMEDefang 2.79 on 10.5.11.12","X-Greylist":"Sender IP whitelisted, not delayed by milter-greylist-4.5.16\n\t(mx1.redhat.com [10.5.110.26]);\n\tMon, 18 Sep 2017 16:12:22 +0000 (UTC)","Sender":"netdev-owner@vger.kernel.org","Precedence":"bulk","List-ID":"<netdev.vger.kernel.org>","X-Mailing-List":"netdev@vger.kernel.org"}},{"id":1770287,"web_url":"http://patchwork.ozlabs.org/comment/1770287/","msgid":"<1505751303.24112.0.camel@redhat.com>","list_archive_url":null,"date":"2017-09-18T16:15:03","subject":"Re: [PATCH 06/16] thunderbolt: Add support for XDomain discovery\n\tprotocol","submitter":{"id":665,"url":"http://patchwork.ozlabs.org/api/people/665/","name":"Dan 
Williams","email":"dcbw@redhat.com"},"content":"On Mon, 2017-09-18 at 11:12 -0500, Dan Williams wrote:\n> On Mon, 2017-09-18 at 18:30 +0300, Mika Westerberg wrote:\n> > When two hosts are connected over a Thunderbolt cable, there is a\n> > protocol they can use to communicate capabilities supported by the\n> > host.\n> > The discovery protocol uses automatically configured control\n> > channel\n> > (ring 0) and is built on top of request/response transactions using\n> > special XDomain primitives provided by the Thunderbolt base\n> > protocol.\n> > \n> > The capabilities consist of a root directory block of basic\n> > properties\n> > used for identification of the host, and then there can be zero or\n> > more\n> > directories each describing a Thunderbolt service and its\n> > capabilities.\n> > \n> > Once both sides have discovered what is supported the two hosts can\n> > set up high-speed DMA paths and transfer data to the other side\n> > using\n> > whatever protocol was agreed based on the properties. The software\n> > protocol used to communicate which DMA paths to enable is service\n> > specific.\n> > \n> > This patch adds support for the XDomain discovery protocol to the\n> > Thunderbolt bus. We model each remote host connection as a Linux\n> > XDomain\n> > device. 
For each Thunderbolt service found supported on the XDomain\n> > device, we create a Linux Thunderbolt service device which\n> > Thunderbolt\n> > service drivers can then bind to based on the protocol\n> > identification\n> > information retrieved from the property directory describing the\n> > service.\n> > \n> > This code is based on the work done by Amir Levy and Michael Jamet.\n> > \n> > Signed-off-by: Michael Jamet <michael.jamet@intel.com>\n> > Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>\n> > Reviewed-by: Yehezkel Bernat <yehezkel.bernat@intel.com>\n> > ---\n> >  Documentation/ABI/testing/sysfs-bus-thunderbolt |   48 +\n> >  drivers/thunderbolt/Makefile                    |    2 +-\n> >  drivers/thunderbolt/ctl.c                       |   11 +-\n> >  drivers/thunderbolt/ctl.h                       |    2 +-\n> >  drivers/thunderbolt/domain.c                    |  197 ++-\n> >  drivers/thunderbolt/icm.c                       |  218 +++-\n> >  drivers/thunderbolt/nhi.h                       |    2 +\n> >  drivers/thunderbolt/switch.c                    |    7 +-\n> >  drivers/thunderbolt/tb.h                        |   39 +-\n> >  drivers/thunderbolt/tb_msgs.h                   |  123 ++\n> >  drivers/thunderbolt/xdomain.c                   | 1576\n> > +++++++++++++++++++++++\n> >  include/linux/mod_devicetable.h                 |   26 +\n> >  include/linux/thunderbolt.h                     |  242 ++++\n> >  scripts/mod/devicetable-offsets.c               |    7 +\n> >  scripts/mod/file2alias.c                        |   25 +\n> >  15 files changed, 2507 insertions(+), 18 deletions(-)\n> >  create mode 100644 drivers/thunderbolt/xdomain.c\n> > \n> > diff --git a/Documentation/ABI/testing/sysfs-bus-thunderbolt\n> > b/Documentation/ABI/testing/sysfs-bus-thunderbolt\n> > index 392bef5bd399..cb48850bd79b 100644\n> > --- a/Documentation/ABI/testing/sysfs-bus-thunderbolt\n> > +++ b/Documentation/ABI/testing/sysfs-bus-thunderbolt\n> > @@ -110,3 
+110,51 @@ Description:\tWhen new NVM image is\n> > written to the non-active NVM\n> >  \t\tis directly the status value from the DMA\n> > configuration\n> >  \t\tbased mailbox before the device is power cycled.\n> > Writing\n> >  \t\t0 here clears the status.\n> > +\n> > +What:\t\t/sys/bus/thunderbolt/devices/<xdomain>.<servi\n> > ce\n> > > /key\n> > \n> > +Date:\t\tDec 2017\n> > +KernelVersion:\t4.14\n> > +Contact:\tthunderbolt-software@lists.01.org\n> > +Description:\tThis contains name of the property directory\n> > the\n> > XDomain\n> > +\t\tservice exposes. This entry describes the protocol\n> > in\n> > +\t\tquestion. Following directories are already\n> > reserved\n> > by\n> > +\t\tthe Apple XDomain specification:\n> > +\n> > +\t\tnetwork:  IP/ethernet over Thunderbolt\n> > +\t\ttargetdm: Target disk mode protocol over\n> > Thunderbolt\n> > +\t\textdisp:  External display mode protocol over\n> > Thunderbolt\n> > +\n> > +What:\t\t/sys/bus/thunderbolt/devices/<xdomain>.<servi\n> > ce\n> > > /modalias\n> > \n> > +Date:\t\tDec 2017\n> > +KernelVersion:\t4.14\n> > +Contact:\tthunderbolt-software@lists.01.org\n> > +Description:\tStores the same MODALIAS value emitted by\n> > uevent\n> > for\n> > +\t\tthe XDomain service. 
Format: tbtsvc:kSpNvNrN\n> > +\n> > +What:\t\t/sys/bus/thunderbolt/devices/<xdomain>.<servi\n> > ce\n> > > /prtcid\n> > \n> > +Date:\t\tDec 2017\n> > +KernelVersion:\t4.14\n> > +Contact:\tthunderbolt-software@lists.01.org\n> > +Description:\tThis contains XDomain protocol identifier the\n> > XDomain\n> > +\t\tservice supports.\n> > +\n> > +What:\t\t/sys/bus/thunderbolt/devices/<xdomain>.<servi\n> > ce\n> > > /prtcvers\n> > \n> > +Date:\t\tDec 2017\n> > +KernelVersion:\t4.14\n> > +Contact:\tthunderbolt-software@lists.01.org\n> > +Description:\tThis contains XDomain protocol version the\n> > XDomain\n> > +\t\tservice supports.\n> > +\n> > +What:\t\t/sys/bus/thunderbolt/devices/<xdomain>.<servi\n> > ce\n> > > /prtcrevs\n> > \n> > +Date:\t\tDec 2017\n> > +KernelVersion:\t4.14\n> > +Contact:\tthunderbolt-software@lists.01.org\n> > +Description:\tThis contains XDomain software version the\n> > XDomain\n> > +\t\tservice supports.\n> > +\n> > +What:\t\t/sys/bus/thunderbolt/devices/<xdomain>.<servi\n> > ce\n> > > /prtcstns\n> > \n> > +Date:\t\tDec 2017\n> > +KernelVersion:\t4.14\n> > +Contact:\tthunderbolt-software@lists.01.org\n> > +Description:\tThis contains XDomain service specific\n> > settings\n> > as\n> > +\t\tbitmask. 
Format: %x\n> > diff --git a/drivers/thunderbolt/Makefile\n> > b/drivers/thunderbolt/Makefile\n> > index 7afd21f5383a..f2f0de27252b 100644\n> > --- a/drivers/thunderbolt/Makefile\n> > +++ b/drivers/thunderbolt/Makefile\n> > @@ -1,3 +1,3 @@\n> >  obj-${CONFIG_THUNDERBOLT} := thunderbolt.o\n> >  thunderbolt-objs := nhi.o ctl.o tb.o switch.o cap.o path.o\n> > tunnel_pci.o eeprom.o\n> > -thunderbolt-objs += domain.o dma_port.o icm.o property.o\n> > +thunderbolt-objs += domain.o dma_port.o icm.o property.o xdomain.o\n> > diff --git a/drivers/thunderbolt/ctl.c b/drivers/thunderbolt/ctl.c\n> > index e6a4c9458c76..46e393c5fd1d 100644\n> > --- a/drivers/thunderbolt/ctl.c\n> > +++ b/drivers/thunderbolt/ctl.c\n> > @@ -368,10 +368,10 @@ static int tb_ctl_tx(struct tb_ctl *ctl,\n> > const\n> > void *data, size_t len,\n> >  /**\n> >   * tb_ctl_handle_event() - acknowledge a plug event, invoke ctl-\n> > > callback\n> > \n> >   */\n> > -static void tb_ctl_handle_event(struct tb_ctl *ctl, enum\n> > tb_cfg_pkg_type type,\n> > +static bool tb_ctl_handle_event(struct tb_ctl *ctl, enum\n> > tb_cfg_pkg_type type,\n> >  \t\t\t\tstruct ctl_pkg *pkg, size_t size)\n> >  {\n> > -\tctl->callback(ctl->callback_data, type, pkg->buffer,\n> > size);\n> > +\treturn ctl->callback(ctl->callback_data, type, pkg-\n> > >buffer,\n> > size);\n> >  }\n> >  \n> >  static void tb_ctl_rx_submit(struct ctl_pkg *pkg)\n> > @@ -444,6 +444,8 @@ static void tb_ctl_rx_callback(struct tb_ring\n> > *ring, struct ring_frame *frame,\n> >  \t\tbreak;\n> >  \n> >  \tcase TB_CFG_PKG_EVENT:\n> > +\tcase TB_CFG_PKG_XDOMAIN_RESP:\n> > +\tcase TB_CFG_PKG_XDOMAIN_REQ:\n> >  \t\tif (*(__be32 *)(pkg->buffer + frame->size) !=\n> > crc32)\n> > {\n> >  \t\t\ttb_ctl_err(pkg->ctl,\n> >  \t\t\t\t   \"RX: checksum mismatch,\n> > dropping\n> > packet\\n\");\n> > @@ -451,8 +453,9 @@ static void tb_ctl_rx_callback(struct tb_ring\n> > *ring, struct ring_frame *frame,\n> >  \t\t}\n> >  \t\t/* Fall through */\n> >  \tcase 
TB_CFG_PKG_ICM_EVENT:\n> > -\t\ttb_ctl_handle_event(pkg->ctl, frame->eof, pkg,\n> > frame->size);\n> > -\t\tgoto rx;\n> > +\t\tif (tb_ctl_handle_event(pkg->ctl, frame->eof, pkg,\n> > frame->size))\n> > +\t\t\tgoto rx;\n> > +\t\tbreak;\n> >  \n> >  \tdefault:\n> >  \t\tbreak;\n> > diff --git a/drivers/thunderbolt/ctl.h b/drivers/thunderbolt/ctl.h\n> > index d0f21e1e0b8b..85c49dd301ea 100644\n> > --- a/drivers/thunderbolt/ctl.h\n> > +++ b/drivers/thunderbolt/ctl.h\n> > @@ -16,7 +16,7 @@\n> >  /* control channel */\n> >  struct tb_ctl;\n> >  \n> > -typedef void (*event_cb)(void *data, enum tb_cfg_pkg_type type,\n> > +typedef bool (*event_cb)(void *data, enum tb_cfg_pkg_type type,\n> >  \t\t\t const void *buf, size_t size);\n> >  \n> >  struct tb_ctl *tb_ctl_alloc(struct tb_nhi *nhi, event_cb cb, void\n> > *cb_data);\n> > diff --git a/drivers/thunderbolt/domain.c\n> > b/drivers/thunderbolt/domain.c\n> > index 9f2dcd48974d..29d6436ec8ce 100644\n> > --- a/drivers/thunderbolt/domain.c\n> > +++ b/drivers/thunderbolt/domain.c\n> > @@ -20,6 +20,98 @@\n> >  \n> >  static DEFINE_IDA(tb_domain_ida);\n> >  \n> > +static bool match_service_id(const struct tb_service_id *id,\n> > +\t\t\t     const struct tb_service *svc)\n> > +{\n> > +\tif (id->match_flags & TBSVC_MATCH_PROTOCOL_KEY) {\n> > +\t\tif (strcmp(id->protocol_key, svc->key))\n> > +\t\t\treturn false;\n> > +\t}\n> > +\n> > +\tif (id->match_flags & TBSVC_MATCH_PROTOCOL_ID) {\n> > +\t\tif (id->protocol_id != svc->prtcid)\n> > +\t\t\treturn false;\n> > +\t}\n> > +\n> > +\tif (id->match_flags & TBSVC_MATCH_PROTOCOL_VERSION) {\n> > +\t\tif (id->protocol_version != svc->prtcvers)\n> > +\t\t\treturn false;\n> > +\t}\n> > +\n> > +\tif (id->match_flags & TBSVC_MATCH_PROTOCOL_REVISION) {\n> > +\t\tif (id->protocol_revision != svc->prtcrevs)\n> > +\t\t\treturn false;\n> > +\t}\n> > +\n> > +\treturn true;\n> > +}\n> > +\n> > +static const struct tb_service_id *__tb_service_match(struct\n> > device\n> > *dev,\n> > +\t\t\t\t\t\t      
struct\n> > device_driver *drv)\n> > +{\n> > +\tstruct tb_service_driver *driver;\n> > +\tconst struct tb_service_id *ids;\n> > +\tstruct tb_service *svc;\n> > +\n> > +\tsvc = tb_to_service(dev);\n> > +\tif (!svc)\n> > +\t\treturn NULL;\n> > +\n> > +\tdriver = container_of(drv, struct tb_service_driver,\n> > driver);\n> > +\tif (!driver->id_table)\n> > +\t\treturn NULL;\n> > +\n> > +\tfor (ids = driver->id_table; ids->match_flags != 0; ids++)\n> > {\n> > +\t\tif (match_service_id(ids, svc))\n> > +\t\t\treturn ids;\n> > +\t}\n> > +\n> > +\treturn NULL;\n> > +}\n> > +\n> > +static int tb_service_match(struct device *dev, struct\n> > device_driver\n> > *drv)\n> > +{\n> > +\treturn !!__tb_service_match(dev, drv);\n> > +}\n> > +\n> > +static int tb_service_probe(struct device *dev)\n> > +{\n> > +\tstruct tb_service *svc = tb_to_service(dev);\n> > +\tstruct tb_service_driver *driver;\n> > +\tconst struct tb_service_id *id;\n> > +\n> > +\tdriver = container_of(dev->driver, struct\n> > tb_service_driver,\n> > driver);\n> > +\tid = __tb_service_match(dev, &driver->driver);\n> > +\n> > +\treturn driver->probe(svc, id);\n> \n> Could you pass 'dev' to the probe function so that things like the\n> network sub-driver can sensibly link the netdev to the parent\n> hardware\n> in sysfs with SET_NETDEV_DEV()?\n\nNevermind, I'm blind, you've handled that already in patch #16.  
Ignore\nme.\n\nDan\n\n\n> Dan\n> \n> > +}\n> > +\n> > +static int tb_service_remove(struct device *dev)\n> > +{\n> > +\tstruct tb_service *svc = tb_to_service(dev);\n> > +\tstruct tb_service_driver *driver;\n> > +\n> > +\tdriver = container_of(dev->driver, struct\n> > tb_service_driver,\n> > driver);\n> > +\tif (driver->remove)\n> > +\t\tdriver->remove(svc);\n> > +\n> > +\treturn 0;\n> > +}\n> > +\n> > +static void tb_service_shutdown(struct device *dev)\n> > +{\n> > +\tstruct tb_service_driver *driver;\n> > +\tstruct tb_service *svc;\n> > +\n> > +\tsvc = tb_to_service(dev);\n> > +\tif (!svc || !dev->driver)\n> > +\t\treturn;\n> > +\n> > +\tdriver = container_of(dev->driver, struct\n> > tb_service_driver,\n> > driver);\n> > +\tif (driver->shutdown)\n> > +\t\tdriver->shutdown(svc);\n> > +}\n> > +\n> >  static const char * const tb_security_names[] = {\n> >  \t[TB_SECURITY_NONE] = \"none\",\n> >  \t[TB_SECURITY_USER] = \"user\",\n> > @@ -52,6 +144,10 @@ static const struct attribute_group\n> > *domain_attr_groups[] = {\n> >  \n> >  struct bus_type tb_bus_type = {\n> >  \t.name = \"thunderbolt\",\n> > +\t.match = tb_service_match,\n> > +\t.probe = tb_service_probe,\n> > +\t.remove = tb_service_remove,\n> > +\t.shutdown = tb_service_shutdown,\n> >  };\n> >  \n> >  static void tb_domain_release(struct device *dev)\n> > @@ -128,17 +224,26 @@ struct tb *tb_domain_alloc(struct tb_nhi\n> > *nhi,\n> > size_t privsize)\n> >  \treturn NULL;\n> >  }\n> >  \n> > -static void tb_domain_event_cb(void *data, enum tb_cfg_pkg_type\n> > type,\n> > +static bool tb_domain_event_cb(void *data, enum tb_cfg_pkg_type\n> > type,\n> >  \t\t\t       const void *buf, size_t size)\n> >  {\n> >  \tstruct tb *tb = data;\n> >  \n> >  \tif (!tb->cm_ops->handle_event) {\n> >  \t\ttb_warn(tb, \"domain does not have event\n> > handler\\n\");\n> > -\t\treturn;\n> > +\t\treturn true;\n> >  \t}\n> >  \n> > -\ttb->cm_ops->handle_event(tb, type, buf, size);\n> > +\tswitch (type) {\n> > +\tcase 
TB_CFG_PKG_XDOMAIN_REQ:\n> > +\tcase TB_CFG_PKG_XDOMAIN_RESP:\n> > +\t\treturn tb_xdomain_handle_request(tb, type, buf,\n> > size);\n> > +\n> > +\tdefault:\n> > +\t\ttb->cm_ops->handle_event(tb, type, buf, size);\n> > +\t}\n> > +\n> > +\treturn true;\n> >  }\n> >  \n> >  /**\n> > @@ -443,9 +548,92 @@ int tb_domain_disconnect_pcie_paths(struct tb\n> > *tb)\n> >  \treturn tb->cm_ops->disconnect_pcie_paths(tb);\n> >  }\n> >  \n> > +/**\n> > + * tb_domain_approve_xdomain_paths() - Enable DMA paths for\n> > XDomain\n> > + * @tb: Domain enabling the DMA paths\n> > + * @xd: XDomain DMA paths are created to\n> > + *\n> > + * Calls connection manager specific method to enable DMA paths to\n> > the\n> > + * XDomain in question.\n> > + *\n> > + * Return: %0 in case of success and negative errno otherwise. In\n> > + * particular returns %-ENOTSUPP if the connection manager\n> > + * implementation does not support XDomains.\n> > + */\n> > +int tb_domain_approve_xdomain_paths(struct tb *tb, struct\n> > tb_xdomain\n> > *xd)\n> > +{\n> > +\tif (!tb->cm_ops->approve_xdomain_paths)\n> > +\t\treturn -ENOTSUPP;\n> > +\n> > +\treturn tb->cm_ops->approve_xdomain_paths(tb, xd);\n> > +}\n> > +\n> > +/**\n> > + * tb_domain_disconnect_xdomain_paths() - Disable DMA paths for\n> > XDomain\n> > + * @tb: Domain disabling the DMA paths\n> > + * @xd: XDomain whose DMA paths are disconnected\n> > + *\n> > + * Calls connection manager specific method to disconnect DMA\n> > paths\n> > to\n> > + * the XDomain in question.\n> > + *\n> > + * Return: %0 in case of success and negative errno otherwise. 
In\n> > + * particular returns %-ENOTSUPP if the connection manager\n> > + * implementation does not support XDomains.\n> > + */\n> > +int tb_domain_disconnect_xdomain_paths(struct tb *tb, struct\n> > tb_xdomain *xd)\n> > +{\n> > +\tif (!tb->cm_ops->disconnect_xdomain_paths)\n> > +\t\treturn -ENOTSUPP;\n> > +\n> > +\treturn tb->cm_ops->disconnect_xdomain_paths(tb, xd);\n> > +}\n> > +\n> > +static int disconnect_xdomain(struct device *dev, void *data)\n> > +{\n> > +\tstruct tb_xdomain *xd;\n> > +\tstruct tb *tb = data;\n> > +\tint ret = 0;\n> > +\n> > +\txd = tb_to_xdomain(dev);\n> > +\tif (xd && xd->tb == tb)\n> > +\t\tret = tb_xdomain_disable_paths(xd);\n> > +\n> > +\treturn ret;\n> > +}\n> > +\n> > +/**\n> > + * tb_domain_disconnect_all_paths() - Disconnect all paths for the\n> > domain\n> > + * @tb: Domain whose paths are disconnected\n> > + *\n> > + * This function can be used to disconnect all paths (PCIe,\n> > XDomain)\n> > for\n> > + * example in preparation for host NVM firmware upgrade. 
After\n> > this\n> > is\n> > + * called the paths cannot be established without resetting the\n> > switch.\n> > + *\n> > + * Return: %0 in case of success and negative errno otherwise.\n> > + */\n> > +int tb_domain_disconnect_all_paths(struct tb *tb)\n> > +{\n> > +\tint ret;\n> > +\n> > +\tret = tb_domain_disconnect_pcie_paths(tb);\n> > +\tif (ret)\n> > +\t\treturn ret;\n> > +\n> > +\treturn bus_for_each_dev(&tb_bus_type, NULL, tb,\n> > disconnect_xdomain);\n> > +}\n> > +\n> >  int tb_domain_init(void)\n> >  {\n> > -\treturn bus_register(&tb_bus_type);\n> > +\tint ret;\n> > +\n> > +\tret = tb_xdomain_init();\n> > +\tif (ret)\n> > +\t\treturn ret;\n> > +\tret = bus_register(&tb_bus_type);\n> > +\tif (ret)\n> > +\t\ttb_xdomain_exit();\n> > +\n> > +\treturn ret;\n> >  }\n> >  \n> >  void tb_domain_exit(void)\n> > @@ -453,4 +641,5 @@ void tb_domain_exit(void)\n> >  \tbus_unregister(&tb_bus_type);\n> >  \tida_destroy(&tb_domain_ida);\n> >  \ttb_switch_exit();\n> > +\ttb_xdomain_exit();\n> >  }\n> > diff --git a/drivers/thunderbolt/icm.c b/drivers/thunderbolt/icm.c\n> > index 8c22b91ed040..ab02d13f40b7 100644\n> > --- a/drivers/thunderbolt/icm.c\n> > +++ b/drivers/thunderbolt/icm.c\n> > @@ -60,6 +60,8 @@\n> >   * @get_route: Find a route string for given switch\n> >   * @device_connected: Handle device connected ICM message\n> >   * @device_disconnected: Handle device disconnected ICM message\n> > + * @xdomain_connected: Handle XDomain connected ICM message\n> > + * @xdomain_disconnected: Handle XDomain disconnected ICM message\n> >   */\n> >  struct icm {\n> >  \tstruct mutex request_lock;\n> > @@ -74,6 +76,10 @@ struct icm {\n> >  \t\t\t\t const struct icm_pkg_header\n> > *hdr);\n> >  \tvoid (*device_disconnected)(struct tb *tb,\n> >  \t\t\t\t    const struct icm_pkg_header\n> > *hdr);\n> > +\tvoid (*xdomain_connected)(struct tb *tb,\n> > +\t\t\t\t  const struct icm_pkg_header\n> > *hdr);\n> > +\tvoid (*xdomain_disconnected)(struct tb *tb,\n> > +\t\t\t\t     const 
struct icm_pkg_header\n> > *hdr);\n> >  };\n> >  \n> >  struct icm_notification {\n> > @@ -89,7 +95,10 @@ static inline struct tb *icm_to_tb(struct icm\n> > *icm)\n> >  \n> >  static inline u8 phy_port_from_route(u64 route, u8 depth)\n> >  {\n> > -\treturn tb_phy_port_from_link(route >> ((depth - 1) * 8));\n> > +\tu8 link;\n> > +\n> > +\tlink = depth ? route >> ((depth - 1) * 8) : route;\n> > +\treturn tb_phy_port_from_link(link);\n> >  }\n> >  \n> >  static inline u8 dual_link_from_link(u8 link)\n> > @@ -320,6 +329,51 @@ static int icm_fr_challenge_switch_key(struct\n> > tb\n> > *tb, struct tb_switch *sw,\n> >  \treturn 0;\n> >  }\n> >  \n> > +static int icm_fr_approve_xdomain_paths(struct tb *tb, struct\n> > tb_xdomain *xd)\n> > +{\n> > +\tstruct icm_fr_pkg_approve_xdomain_response reply;\n> > +\tstruct icm_fr_pkg_approve_xdomain request;\n> > +\tint ret;\n> > +\n> > +\tmemset(&request, 0, sizeof(request));\n> > +\trequest.hdr.code = ICM_APPROVE_XDOMAIN;\n> > +\trequest.link_info = xd->depth << ICM_LINK_INFO_DEPTH_SHIFT\n> > |\n> > xd->link;\n> > +\tmemcpy(&request.remote_uuid, xd->remote_uuid, sizeof(*xd-\n> > > remote_uuid));\n> > \n> > +\n> > +\trequest.transmit_path = xd->transmit_path;\n> > +\trequest.transmit_ring = xd->transmit_ring;\n> > +\trequest.receive_path = xd->receive_path;\n> > +\trequest.receive_ring = xd->receive_ring;\n> > +\n> > +\tmemset(&reply, 0, sizeof(reply));\n> > +\tret = icm_request(tb, &request, sizeof(request), &reply,\n> > sizeof(reply),\n> > +\t\t\t  1, ICM_TIMEOUT);\n> > +\tif (ret)\n> > +\t\treturn ret;\n> > +\n> > +\tif (reply.hdr.flags & ICM_FLAGS_ERROR)\n> > +\t\treturn -EIO;\n> > +\n> > +\treturn 0;\n> > +}\n> > +\n> > +static int icm_fr_disconnect_xdomain_paths(struct tb *tb, struct\n> > tb_xdomain *xd)\n> > +{\n> > +\tu8 phy_port;\n> > +\tu8 cmd;\n> > +\n> > +\tphy_port = tb_phy_port_from_link(xd->link);\n> > +\tif (phy_port == 0)\n> > +\t\tcmd = NHI_MAILBOX_DISCONNECT_PA;\n> > +\telse\n> > +\t\tcmd = 
NHI_MAILBOX_DISCONNECT_PB;\n> > +\n> > +\tnhi_mailbox_cmd(tb->nhi, cmd, 1);\n> > +\tusleep_range(10, 50);\n> > +\tnhi_mailbox_cmd(tb->nhi, cmd, 2);\n> > +\treturn 0;\n> > +}\n> > +\n> >  static void remove_switch(struct tb_switch *sw)\n> >  {\n> >  \tstruct tb_switch *parent_sw;\n> > @@ -475,6 +529,141 @@ icm_fr_device_disconnected(struct tb *tb,\n> > const\n> > struct icm_pkg_header *hdr)\n> >  \ttb_switch_put(sw);\n> >  }\n> >  \n> > +static void remove_xdomain(struct tb_xdomain *xd)\n> > +{\n> > +\tstruct tb_switch *sw;\n> > +\n> > +\tsw = tb_to_switch(xd->dev.parent);\n> > +\ttb_port_at(xd->route, sw)->xdomain = NULL;\n> > +\ttb_xdomain_remove(xd);\n> > +}\n> > +\n> > +static void\n> > +icm_fr_xdomain_connected(struct tb *tb, const struct\n> > icm_pkg_header\n> > *hdr)\n> > +{\n> > +\tconst struct icm_fr_event_xdomain_connected *pkg =\n> > +\t\t(const struct icm_fr_event_xdomain_connected\n> > *)hdr;\n> > +\tstruct tb_xdomain *xd;\n> > +\tstruct tb_switch *sw;\n> > +\tu8 link, depth;\n> > +\tbool approved;\n> > +\tu64 route;\n> > +\n> > +\t/*\n> > +\t * After NVM upgrade adding root switch device fails\n> > because\n> > we\n> > +\t * initiated reset. 
During that time ICM might still send\n> > +\t * XDomain connected message which we ignore here.\n> > +\t */\n> > +\tif (!tb->root_switch)\n> > +\t\treturn;\n> > +\n> > +\tlink = pkg->link_info & ICM_LINK_INFO_LINK_MASK;\n> > +\tdepth = (pkg->link_info & ICM_LINK_INFO_DEPTH_MASK) >>\n> > +\t\tICM_LINK_INFO_DEPTH_SHIFT;\n> > +\tapproved = pkg->link_info & ICM_LINK_INFO_APPROVED;\n> > +\n> > +\tif (link > ICM_MAX_LINK || depth > ICM_MAX_DEPTH) {\n> > +\t\ttb_warn(tb, \"invalid topology %u.%u, ignoring\\n\",\n> > link, depth);\n> > +\t\treturn;\n> > +\t}\n> > +\n> > +\troute = get_route(pkg->local_route_hi, pkg-\n> > >local_route_lo);\n> > +\n> > +\txd = tb_xdomain_find_by_uuid(tb, &pkg->remote_uuid);\n> > +\tif (xd) {\n> > +\t\tu8 xd_phy_port, phy_port;\n> > +\n> > +\t\txd_phy_port = phy_port_from_route(xd->route, xd-\n> > > depth);\n> > \n> > +\t\tphy_port = phy_port_from_route(route, depth);\n> > +\n> > +\t\tif (xd->depth == depth && xd_phy_port == phy_port)\n> > {\n> > +\t\t\txd->link = link;\n> > +\t\t\txd->route = route;\n> > +\t\t\txd->is_unplugged = false;\n> > +\t\t\ttb_xdomain_put(xd);\n> > +\t\t\treturn;\n> > +\t\t}\n> > +\n> > +\t\t/*\n> > +\t\t * If we find an existing XDomain connection\n> > remove\n> > it\n> > +\t\t * now. 
We need to go through login handshake and\n> > +\t\t * everything anyway to be able to re-establish\n> > the\n> > +\t\t * connection.\n> > +\t\t */\n> > +\t\tremove_xdomain(xd);\n> > +\t\ttb_xdomain_put(xd);\n> > +\t}\n> > +\n> > +\t/*\n> > +\t * Look if there already exists an XDomain in the same\n> > place\n> > +\t * than the new one and in that case remove it because it\n> > is\n> > +\t * most likely another host that got disconnected.\n> > +\t */\n> > +\txd = tb_xdomain_find_by_link_depth(tb, link, depth);\n> > +\tif (!xd) {\n> > +\t\tu8 dual_link;\n> > +\n> > +\t\tdual_link = dual_link_from_link(link);\n> > +\t\tif (dual_link)\n> > +\t\t\txd = tb_xdomain_find_by_link_depth(tb,\n> > dual_link,\n> > +\t\t\t\t\t\t\t   depth);\n> > +\t}\n> > +\tif (xd) {\n> > +\t\tremove_xdomain(xd);\n> > +\t\ttb_xdomain_put(xd);\n> > +\t}\n> > +\n> > +\t/*\n> > +\t * If the user disconnected a switch during suspend and\n> > +\t * connected another host to the same port, remove the\n> > switch\n> > +\t * first.\n> > +\t */\n> > +\tsw = get_switch_at_route(tb->root_switch, route);\n> > +\tif (sw)\n> > +\t\tremove_switch(sw);\n> > +\n> > +\tsw = tb_switch_find_by_link_depth(tb, link, depth);\n> > +\tif (!sw) {\n> > +\t\ttb_warn(tb, \"no switch exists at %u.%u,\n> > ignoring\\n\",\n> > link,\n> > +\t\t\tdepth);\n> > +\t\treturn;\n> > +\t}\n> > +\n> > +\txd = tb_xdomain_alloc(sw->tb, &sw->dev, route,\n> > +\t\t\t      &pkg->local_uuid, &pkg-\n> > >remote_uuid);\n> > +\tif (!xd) {\n> > +\t\ttb_switch_put(sw);\n> > +\t\treturn;\n> > +\t}\n> > +\n> > +\txd->link = link;\n> > +\txd->depth = depth;\n> > +\n> > +\ttb_port_at(route, sw)->xdomain = xd;\n> > +\n> > +\ttb_xdomain_add(xd);\n> > +\ttb_switch_put(sw);\n> > +}\n> > +\n> > +static void\n> > +icm_fr_xdomain_disconnected(struct tb *tb, const struct\n> > icm_pkg_header *hdr)\n> > +{\n> > +\tconst struct icm_fr_event_xdomain_disconnected *pkg =\n> > +\t\t(const struct icm_fr_event_xdomain_disconnected\n> > *)hdr;\n> > +\tstruct 
tb_xdomain *xd;\n> > +\n> > +\t/*\n> > +\t * If the connection is through one or multiple devices,\n> > the\n> > +\t * XDomain device is removed along with them so it is fine\n> > if we\n> > +\t * cannot find it here.\n> > +\t */\n> > +\txd = tb_xdomain_find_by_uuid(tb, &pkg->remote_uuid);\n> > +\tif (xd) {\n> > +\t\tremove_xdomain(xd);\n> > +\t\ttb_xdomain_put(xd);\n> > +\t}\n> > +}\n> > +\n> >  static struct pci_dev *get_upstream_port(struct pci_dev *pdev)\n> >  {\n> >  \tstruct pci_dev *parent;\n> > @@ -594,6 +783,12 @@ static void icm_handle_notification(struct\n> > work_struct *work)\n> >  \tcase ICM_EVENT_DEVICE_DISCONNECTED:\n> >  \t\ticm->device_disconnected(tb, n->pkg);\n> >  \t\tbreak;\n> > +\tcase ICM_EVENT_XDOMAIN_CONNECTED:\n> > +\t\ticm->xdomain_connected(tb, n->pkg);\n> > +\t\tbreak;\n> > +\tcase ICM_EVENT_XDOMAIN_DISCONNECTED:\n> > +\t\ticm->xdomain_disconnected(tb, n->pkg);\n> > +\t\tbreak;\n> >  \t}\n> >  \n> >  \tmutex_unlock(&tb->lock);\n> > @@ -927,6 +1122,10 @@ static void icm_unplug_children(struct\n> > tb_switch *sw)\n> >  \n> >  \t\tif (tb_is_upstream_port(port))\n> >  \t\t\tcontinue;\n> > +\t\tif (port->xdomain) {\n> > +\t\t\tport->xdomain->is_unplugged = true;\n> > +\t\t\tcontinue;\n> > +\t\t}\n> >  \t\tif (!port->remote)\n> >  \t\t\tcontinue;\n> >  \n> > @@ -943,6 +1142,13 @@ static void\n> > icm_free_unplugged_children(struct\n> > tb_switch *sw)\n> >  \n> >  \t\tif (tb_is_upstream_port(port))\n> >  \t\t\tcontinue;\n> > +\n> > +\t\tif (port->xdomain && port->xdomain->is_unplugged)\n> > {\n> > +\t\t\ttb_xdomain_remove(port->xdomain);\n> > +\t\t\tport->xdomain = NULL;\n> > +\t\t\tcontinue;\n> > +\t\t}\n> > +\n> >  \t\tif (!port->remote)\n> >  \t\t\tcontinue;\n> >  \n> > @@ -1009,8 +1215,10 @@ static int icm_start(struct tb *tb)\n> >  \ttb->root_switch->no_nvm_upgrade = x86_apple_machine;\n> >  \n> >  \tret = tb_switch_add(tb->root_switch);\n> > -\tif (ret)\n> > +\tif (ret) {\n> >  \t\ttb_switch_put(tb->root_switch);\n> > 
+\t\ttb->root_switch = NULL;\n> > +\t}\n> >  \n> >  \treturn ret;\n> >  }\n> > @@ -1042,6 +1250,8 @@ static const struct tb_cm_ops icm_fr_ops = {\n> >  \t.add_switch_key = icm_fr_add_switch_key,\n> >  \t.challenge_switch_key = icm_fr_challenge_switch_key,\n> >  \t.disconnect_pcie_paths = icm_disconnect_pcie_paths,\n> > +\t.approve_xdomain_paths = icm_fr_approve_xdomain_paths,\n> > +\t.disconnect_xdomain_paths =\n> > icm_fr_disconnect_xdomain_paths,\n> >  };\n> >  \n> >  struct tb *icm_probe(struct tb_nhi *nhi)\n> > @@ -1064,6 +1274,8 @@ struct tb *icm_probe(struct tb_nhi *nhi)\n> >  \t\ticm->get_route = icm_fr_get_route;\n> >  \t\ticm->device_connected = icm_fr_device_connected;\n> >  \t\ticm->device_disconnected =\n> > icm_fr_device_disconnected;\n> > +\t\ticm->xdomain_connected = icm_fr_xdomain_connected;\n> > +\t\ticm->xdomain_disconnected =\n> > icm_fr_xdomain_disconnected;\n> >  \t\ttb->cm_ops = &icm_fr_ops;\n> >  \t\tbreak;\n> >  \n> > @@ -1077,6 +1289,8 @@ struct tb *icm_probe(struct tb_nhi *nhi)\n> >  \t\ticm->get_route = icm_ar_get_route;\n> >  \t\ticm->device_connected = icm_fr_device_connected;\n> >  \t\ticm->device_disconnected =\n> > icm_fr_device_disconnected;\n> > +\t\ticm->xdomain_connected = icm_fr_xdomain_connected;\n> > +\t\ticm->xdomain_disconnected =\n> > icm_fr_xdomain_disconnected;\n> >  \t\ttb->cm_ops = &icm_fr_ops;\n> >  \t\tbreak;\n> >  \t}\n> > diff --git a/drivers/thunderbolt/nhi.h b/drivers/thunderbolt/nhi.h\n> > index 5b5bb2c436be..0e05828983db 100644\n> > --- a/drivers/thunderbolt/nhi.h\n> > +++ b/drivers/thunderbolt/nhi.h\n> > @@ -157,6 +157,8 @@ enum nhi_mailbox_cmd {\n> >  \tNHI_MAILBOX_SAVE_DEVS = 0x05,\n> >  \tNHI_MAILBOX_DISCONNECT_PCIE_PATHS = 0x06,\n> >  \tNHI_MAILBOX_DRV_UNLOADS = 0x07,\n> > +\tNHI_MAILBOX_DISCONNECT_PA = 0x10,\n> > +\tNHI_MAILBOX_DISCONNECT_PB = 0x11,\n> >  \tNHI_MAILBOX_ALLOW_ALL_DEVS = 0x23,\n> >  };\n> >  \n> > diff --git a/drivers/thunderbolt/switch.c\n> > b/drivers/thunderbolt/switch.c\n> > index 
53f40c57df59..dfc357d33e1e 100644\n> > --- a/drivers/thunderbolt/switch.c\n> > +++ b/drivers/thunderbolt/switch.c\n> > @@ -171,11 +171,11 @@ static int nvm_authenticate_host(struct\n> > tb_switch *sw)\n> >  \n> >  \t/*\n> >  \t * Root switch NVM upgrade requires that we disconnect the\n> > -\t * existing PCIe paths first (in case it is not in safe\n> > mode\n> > +\t * existing paths first (in case it is not in safe mode\n> >  \t * already).\n> >  \t */\n> >  \tif (!sw->safe_mode) {\n> > -\t\tret = tb_domain_disconnect_pcie_paths(sw->tb);\n> > +\t\tret = tb_domain_disconnect_all_paths(sw->tb);\n> >  \t\tif (ret)\n> >  \t\t\treturn ret;\n> >  \t\t/*\n> > @@ -1363,6 +1363,9 @@ void tb_switch_remove(struct tb_switch *sw)\n> >  \t\tif (sw->ports[i].remote)\n> >  \t\t\ttb_switch_remove(sw->ports[i].remote->sw);\n> >  \t\tsw->ports[i].remote = NULL;\n> > +\t\tif (sw->ports[i].xdomain)\n> > +\t\t\ttb_xdomain_remove(sw->ports[i].xdomain);\n> > +\t\tsw->ports[i].xdomain = NULL;\n> >  \t}\n> >  \n> >  \tif (!sw->is_unplugged)\n> > diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h\n> > index ea21d927bd09..74af9d4929ab 100644\n> > --- a/drivers/thunderbolt/tb.h\n> > +++ b/drivers/thunderbolt/tb.h\n> > @@ -9,6 +9,7 @@\n> >  \n> >  #include <linux/nvmem-provider.h>\n> >  #include <linux/pci.h>\n> > +#include <linux/thunderbolt.h>\n> >  #include <linux/uuid.h>\n> >  \n> >  #include \"tb_regs.h\"\n> > @@ -109,14 +110,25 @@ struct tb_switch {\n> >  \n> >  /**\n> >   * struct tb_port - a thunderbolt port, part of a tb_switch\n> > + * @config: Cached port configuration read from registers\n> > + * @sw: Switch the port belongs to\n> > + * @remote: Remote port (%NULL if not connected)\n> > + * @xdomain: Remote host (%NULL if not connected)\n> > + * @cap_phy: Offset, zero if not found\n> > + * @port: Port number on switch\n> > + * @disabled: Disabled by eeprom\n> > + * @dual_link_port: If the switch is connected using two ports,\n> > points\n> > + *\t\t    to the other 
port.\n> > + * @link_nr: Is this primary or secondary port on the dual_link.\n> >   */\n> >  struct tb_port {\n> >  \tstruct tb_regs_port_header config;\n> >  \tstruct tb_switch *sw;\n> > -\tstruct tb_port *remote; /* remote port, NULL if not\n> > connected */\n> > -\tint cap_phy; /* offset, zero if not found */\n> > -\tu8 port; /* port number on switch */\n> > -\tbool disabled; /* disabled by eeprom */\n> > +\tstruct tb_port *remote;\n> > +\tstruct tb_xdomain *xdomain;\n> > +\tint cap_phy;\n> > +\tu8 port;\n> > +\tbool disabled;\n> >  \tstruct tb_port *dual_link_port;\n> >  \tu8 link_nr:1;\n> >  };\n> > @@ -189,6 +201,8 @@ struct tb_path {\n> >   * @add_switch_key: Add key to switch\n> >   * @challenge_switch_key: Challenge switch using key\n> >   * @disconnect_pcie_paths: Disconnects PCIe paths before NVM\n> > update\n> > + * @approve_xdomain_paths: Approve (establish) XDomain DMA paths\n> > + * @disconnect_xdomain_paths: Disconnect XDomain DMA paths\n> >   */\n> >  struct tb_cm_ops {\n> >  \tint (*driver_ready)(struct tb *tb);\n> > @@ -205,6 +219,8 @@ struct tb_cm_ops {\n> >  \tint (*challenge_switch_key)(struct tb *tb, struct\n> > tb_switch\n> > *sw,\n> >  \t\t\t\t    const u8 *challenge, u8\n> > *response);\n> >  \tint (*disconnect_pcie_paths)(struct tb *tb);\n> > +\tint (*approve_xdomain_paths)(struct tb *tb, struct\n> > tb_xdomain *xd);\n> > +\tint (*disconnect_xdomain_paths)(struct tb *tb, struct\n> > tb_xdomain *xd);\n> >  };\n> >  \n> >  static inline void *tb_priv(struct tb *tb)\n> > @@ -331,6 +347,8 @@ extern struct device_type tb_switch_type;\n> >  int tb_domain_init(void);\n> >  void tb_domain_exit(void);\n> >  void tb_switch_exit(void);\n> > +int tb_xdomain_init(void);\n> > +void tb_xdomain_exit(void);\n> >  \n> >  struct tb *tb_domain_alloc(struct tb_nhi *nhi, size_t privsize);\n> >  int tb_domain_add(struct tb *tb);\n> > @@ -343,6 +361,9 @@ int tb_domain_approve_switch(struct tb *tb,\n> > struct tb_switch *sw);\n> >  int 
tb_domain_approve_switch_key(struct tb *tb, struct tb_switch\n> > *sw);\n> >  int tb_domain_challenge_switch_key(struct tb *tb, struct tb_switch\n> > *sw);\n> >  int tb_domain_disconnect_pcie_paths(struct tb *tb);\n> > +int tb_domain_approve_xdomain_paths(struct tb *tb, struct\n> > tb_xdomain\n> > *xd);\n> > +int tb_domain_disconnect_xdomain_paths(struct tb *tb, struct\n> > tb_xdomain *xd);\n> > +int tb_domain_disconnect_all_paths(struct tb *tb);\n> >  \n> >  static inline void tb_domain_put(struct tb *tb)\n> >  {\n> > @@ -422,4 +443,14 @@ static inline u64 tb_downstream_route(struct\n> > tb_port *port)\n> >  \t       | ((u64) port->port << (port->sw->config.depth *\n> > 8));\n> >  }\n> >  \n> > +bool tb_xdomain_handle_request(struct tb *tb, enum tb_cfg_pkg_type\n> > type,\n> > +\t\t\t       const void *buf, size_t size);\n> > +struct tb_xdomain *tb_xdomain_alloc(struct tb *tb, struct device\n> > *parent,\n> > +\t\t\t\t    u64 route, const uuid_t\n> > *local_uuid,\n> > +\t\t\t\t    const uuid_t *remote_uuid);\n> > +void tb_xdomain_add(struct tb_xdomain *xd);\n> > +void tb_xdomain_remove(struct tb_xdomain *xd);\n> > +struct tb_xdomain *tb_xdomain_find_by_link_depth(struct tb *tb, u8\n> > link,\n> > +\t\t\t\t\t\t u8 depth);\n> > +\n> >  #endif\n> > diff --git a/drivers/thunderbolt/tb_msgs.h\n> > b/drivers/thunderbolt/tb_msgs.h\n> > index fe3039b05da6..2a76908537a6 100644\n> > --- a/drivers/thunderbolt/tb_msgs.h\n> > +++ b/drivers/thunderbolt/tb_msgs.h\n> > @@ -101,11 +101,14 @@ enum icm_pkg_code {\n> >  \tICM_CHALLENGE_DEVICE = 0x5,\n> >  \tICM_ADD_DEVICE_KEY = 0x6,\n> >  \tICM_GET_ROUTE = 0xa,\n> > +\tICM_APPROVE_XDOMAIN = 0x10,\n> >  };\n> >  \n> >  enum icm_event_code {\n> >  \tICM_EVENT_DEVICE_CONNECTED = 3,\n> >  \tICM_EVENT_DEVICE_DISCONNECTED = 4,\n> > +\tICM_EVENT_XDOMAIN_CONNECTED = 6,\n> > +\tICM_EVENT_XDOMAIN_DISCONNECTED = 7,\n> >  };\n> >  \n> >  struct icm_pkg_header {\n> > @@ -188,6 +191,25 @@ struct icm_fr_event_device_disconnected {\n> >  \tu16 
link_info;\n> >  } __packed;\n> >  \n> > +struct icm_fr_event_xdomain_connected {\n> > +\tstruct icm_pkg_header hdr;\n> > +\tu16 reserved;\n> > +\tu16 link_info;\n> > +\tuuid_t remote_uuid;\n> > +\tuuid_t local_uuid;\n> > +\tu32 local_route_hi;\n> > +\tu32 local_route_lo;\n> > +\tu32 remote_route_hi;\n> > +\tu32 remote_route_lo;\n> > +} __packed;\n> > +\n> > +struct icm_fr_event_xdomain_disconnected {\n> > +\tstruct icm_pkg_header hdr;\n> > +\tu16 reserved;\n> > +\tu16 link_info;\n> > +\tuuid_t remote_uuid;\n> > +} __packed;\n> > +\n> >  struct icm_fr_pkg_add_device_key {\n> >  \tstruct icm_pkg_header hdr;\n> >  \tuuid_t ep_uuid;\n> > @@ -224,6 +246,28 @@ struct icm_fr_pkg_challenge_device_response {\n> >  \tu32 response[8];\n> >  } __packed;\n> >  \n> > +struct icm_fr_pkg_approve_xdomain {\n> > +\tstruct icm_pkg_header hdr;\n> > +\tu16 reserved;\n> > +\tu16 link_info;\n> > +\tuuid_t remote_uuid;\n> > +\tu16 transmit_path;\n> > +\tu16 transmit_ring;\n> > +\tu16 receive_path;\n> > +\tu16 receive_ring;\n> > +} __packed;\n> > +\n> > +struct icm_fr_pkg_approve_xdomain_response {\n> > +\tstruct icm_pkg_header hdr;\n> > +\tu16 reserved;\n> > +\tu16 link_info;\n> > +\tuuid_t remote_uuid;\n> > +\tu16 transmit_path;\n> > +\tu16 transmit_ring;\n> > +\tu16 receive_path;\n> > +\tu16 receive_ring;\n> > +} __packed;\n> > +\n> >  /* Alpine Ridge only messages */\n> >  \n> >  struct icm_ar_pkg_get_route {\n> > @@ -240,4 +284,83 @@ struct icm_ar_pkg_get_route_response {\n> >  \tu32 route_lo;\n> >  } __packed;\n> >  \n> > +/* XDomain messages */\n> > +\n> > +struct tb_xdomain_header {\n> > +\tu32 route_hi;\n> > +\tu32 route_lo;\n> > +\tu32 length_sn;\n> > +} __packed;\n> > +\n> > +#define TB_XDOMAIN_LENGTH_MASK\tGENMASK(5, 0)\n> > +#define TB_XDOMAIN_SN_MASK\tGENMASK(28, 27)\n> > +#define TB_XDOMAIN_SN_SHIFT\t27\n> > +\n> > +enum tb_xdp_type {\n> > +\tUUID_REQUEST_OLD = 1,\n> > +\tUUID_RESPONSE = 2,\n> > +\tPROPERTIES_REQUEST,\n> > +\tPROPERTIES_RESPONSE,\n> > 
+\tPROPERTIES_CHANGED_REQUEST,\n> > +\tPROPERTIES_CHANGED_RESPONSE,\n> > +\tERROR_RESPONSE,\n> > +\tUUID_REQUEST = 12,\n> > +};\n> > +\n> > +struct tb_xdp_header {\n> > +\tstruct tb_xdomain_header xd_hdr;\n> > +\tuuid_t uuid;\n> > +\tu32 type;\n> > +} __packed;\n> > +\n> > +struct tb_xdp_properties {\n> > +\tstruct tb_xdp_header hdr;\n> > +\tuuid_t src_uuid;\n> > +\tuuid_t dst_uuid;\n> > +\tu16 offset;\n> > +\tu16 reserved;\n> > +} __packed;\n> > +\n> > +struct tb_xdp_properties_response {\n> > +\tstruct tb_xdp_header hdr;\n> > +\tuuid_t src_uuid;\n> > +\tuuid_t dst_uuid;\n> > +\tu16 offset;\n> > +\tu16 data_length;\n> > +\tu32 generation;\n> > +\tu32 data[0];\n> > +} __packed;\n> > +\n> > +/*\n> > + * Max length of data array single XDomain property response is\n> > allowed\n> > + * to carry.\n> > + */\n> > +#define TB_XDP_PROPERTIES_MAX_DATA_LENGTH\t\\\n> > +\t(((256 - 4 - sizeof(struct tb_xdp_properties_response))) /\n> > 4)\n> > +\n> > +/* Maximum size of the total property block in dwords we allow */\n> > +#define TB_XDP_PROPERTIES_MAX_LENGTH\t\t500\n> > +\n> > +struct tb_xdp_properties_changed {\n> > +\tstruct tb_xdp_header hdr;\n> > +\tuuid_t src_uuid;\n> > +} __packed;\n> > +\n> > +struct tb_xdp_properties_changed_response {\n> > +\tstruct tb_xdp_header hdr;\n> > +} __packed;\n> > +\n> > +enum tb_xdp_error {\n> > +\tERROR_SUCCESS,\n> > +\tERROR_UNKNOWN_PACKET,\n> > +\tERROR_UNKNOWN_DOMAIN,\n> > +\tERROR_NOT_SUPPORTED,\n> > +\tERROR_NOT_READY,\n> > +};\n> > +\n> > +struct tb_xdp_error_response {\n> > +\tstruct tb_xdp_header hdr;\n> > +\tu32 error;\n> > +} __packed;\n> > +\n> >  #endif\n> > diff --git a/drivers/thunderbolt/xdomain.c\n> > b/drivers/thunderbolt/xdomain.c\n> > new file mode 100644\n> > index 000000000000..1b929be8fdd6\n> > --- /dev/null\n> > +++ b/drivers/thunderbolt/xdomain.c\n> > @@ -0,0 +1,1576 @@\n> > +/*\n> > + * Thunderbolt XDomain discovery protocol support\n> > + *\n> > + * Copyright (C) 2017, Intel Corporation\n> > + * Authors: Michael 
Jamet <michael.jamet@intel.com>\n> > + *          Mika Westerberg <mika.westerberg@linux.intel.com>\n> > + *\n> > + * This program is free software; you can redistribute it and/or\n> > modify\n> > + * it under the terms of the GNU General Public License version 2\n> > as\n> > + * published by the Free Software Foundation.\n> > + */\n> > +\n> > +#include <linux/device.h>\n> > +#include <linux/kmod.h>\n> > +#include <linux/module.h>\n> > +#include <linux/utsname.h>\n> > +#include <linux/uuid.h>\n> > +#include <linux/workqueue.h>\n> > +\n> > +#include \"tb.h\"\n> > +\n> > +#define XDOMAIN_DEFAULT_TIMEOUT\t\t\t5000 /* ms\n> > */\n> > +#define XDOMAIN_PROPERTIES_RETRIES\t\t60\n> > +#define XDOMAIN_PROPERTIES_CHANGED_RETRIES\t10\n> > +\n> > +struct xdomain_request_work {\n> > +\tstruct work_struct work;\n> > +\tstruct tb_xdp_header *pkg;\n> > +\tstruct tb *tb;\n> > +};\n> > +\n> > +/* Serializes access to the properties and protocol handlers below\n> > */\n> > +static DEFINE_MUTEX(xdomain_lock);\n> > +\n> > +/* Properties exposed to the remote domains */\n> > +static struct tb_property_dir *xdomain_property_dir;\n> > +static u32 *xdomain_property_block;\n> > +static u32 xdomain_property_block_len;\n> > +static u32 xdomain_property_block_gen;\n> > +\n> > +/* Additional protocol handlers */\n> > +static LIST_HEAD(protocol_handlers);\n> > +\n> > +/* UUID for XDomain discovery protocol */\n> > +static const uuid_t tb_xdp_uuid =\n> > +\tUUID_INIT(0xb638d70e, 0x42ff, 0x40bb,\n> > +\t\t  0x97, 0xc2, 0x90, 0xe2, 0xc0, 0xb2, 0xff, 0x07);\n> > +\n> > +static bool tb_xdomain_match(const struct tb_cfg_request *req,\n> > +\t\t\t     const struct ctl_pkg *pkg)\n> > +{\n> > +\tswitch (pkg->frame.eof) {\n> > +\tcase TB_CFG_PKG_ERROR:\n> > +\t\treturn true;\n> > +\n> > +\tcase TB_CFG_PKG_XDOMAIN_RESP: {\n> > +\t\tconst struct tb_xdp_header *res_hdr = pkg->buffer;\n> > +\t\tconst struct tb_xdp_header *req_hdr = req-\n> > >request;\n> > +\t\tu8 req_seq, res_seq;\n> > +\n> > +\t\tif 
(pkg->frame.size < req->response_size / 4)\n> > +\t\t\treturn false;\n> > +\n> > +\t\t/* Make sure route matches */\n> > +\t\tif ((res_hdr->xd_hdr.route_hi & ~BIT(31)) !=\n> > +\t\t     req_hdr->xd_hdr.route_hi)\n> > +\t\t\treturn false;\n> > +\t\tif ((res_hdr->xd_hdr.route_lo) != req_hdr-\n> > > xd_hdr.route_lo)\n> > \n> > +\t\t\treturn false;\n> > +\n> > +\t\t/* Then check that the sequence number matches */\n> > +\t\tres_seq = res_hdr->xd_hdr.length_sn &\n> > TB_XDOMAIN_SN_MASK;\n> > +\t\tres_seq >>= TB_XDOMAIN_SN_SHIFT;\n> > +\t\treq_seq = req_hdr->xd_hdr.length_sn &\n> > TB_XDOMAIN_SN_MASK;\n> > +\t\treq_seq >>= TB_XDOMAIN_SN_SHIFT;\n> > +\t\tif (res_seq != req_seq)\n> > +\t\t\treturn false;\n> > +\n> > +\t\t/* Check that the XDomain protocol matches */\n> > +\t\tif (!uuid_equal(&res_hdr->uuid, &req_hdr->uuid))\n> > +\t\t\treturn false;\n> > +\n> > +\t\treturn true;\n> > +\t}\n> > +\n> > +\tdefault:\n> > +\t\treturn false;\n> > +\t}\n> > +}\n> > +\n> > +static bool tb_xdomain_copy(struct tb_cfg_request *req,\n> > +\t\t\t    const struct ctl_pkg *pkg)\n> > +{\n> > +\tmemcpy(req->response, pkg->buffer, req->response_size);\n> > +\treq->result.err = 0;\n> > +\treturn true;\n> > +}\n> > +\n> > +static void response_ready(void *data)\n> > +{\n> > +\ttb_cfg_request_put(data);\n> > +}\n> > +\n> > +static int __tb_xdomain_response(struct tb_ctl *ctl, const void\n> > *response,\n> > +\t\t\t\t size_t size, enum tb_cfg_pkg_type\n> > type)\n> > +{\n> > +\tstruct tb_cfg_request *req;\n> > +\n> > +\treq = tb_cfg_request_alloc();\n> > +\tif (!req)\n> > +\t\treturn -ENOMEM;\n> > +\n> > +\treq->match = tb_xdomain_match;\n> > +\treq->copy = tb_xdomain_copy;\n> > +\treq->request = response;\n> > +\treq->request_size = size;\n> > +\treq->request_type = type;\n> > +\n> > +\treturn tb_cfg_request(ctl, req, response_ready, req);\n> > +}\n> > +\n> > +/**\n> > + * tb_xdomain_response() - Send a XDomain response message\n> > + * @xd: XDomain to send the message\n> > + * @response: 
Response to send\n> > + * @size: Size of the response\n> > + * @type: PDF type of the response\n> > + *\n> > + * This can be used to send a XDomain response message to the\n> > other\n> > + * domain. No response for the message is expected.\n> > + *\n> > + * Return: %0 in case of success and negative errno in case of\n> > failure\n> > + */\n> > +int tb_xdomain_response(struct tb_xdomain *xd, const void\n> > *response,\n> > +\t\t\tsize_t size, enum tb_cfg_pkg_type type)\n> > +{\n> > +\treturn __tb_xdomain_response(xd->tb->ctl, response, size,\n> > type);\n> > +}\n> > +EXPORT_SYMBOL_GPL(tb_xdomain_response);\n> > +\n> > +static int __tb_xdomain_request(struct tb_ctl *ctl, const void\n> > *request,\n> > +\tsize_t request_size, enum tb_cfg_pkg_type request_type,\n> > void\n> > *response,\n> > +\tsize_t response_size, enum tb_cfg_pkg_type response_type,\n> > +\tunsigned int timeout_msec)\n> > +{\n> > +\tstruct tb_cfg_request *req;\n> > +\tstruct tb_cfg_result res;\n> > +\n> > +\treq = tb_cfg_request_alloc();\n> > +\tif (!req)\n> > +\t\treturn -ENOMEM;\n> > +\n> > +\treq->match = tb_xdomain_match;\n> > +\treq->copy = tb_xdomain_copy;\n> > +\treq->request = request;\n> > +\treq->request_size = request_size;\n> > +\treq->request_type = request_type;\n> > +\treq->response = response;\n> > +\treq->response_size = response_size;\n> > +\treq->response_type = response_type;\n> > +\n> > +\tres = tb_cfg_request_sync(ctl, req, timeout_msec);\n> > +\n> > +\ttb_cfg_request_put(req);\n> > +\n> > +\treturn res.err == 1 ? 
-EIO : res.err;\n> > +}\n> > +\n> > +/**\n> > + * tb_xdomain_request() - Send a XDomain request\n> > + * @xd: XDomain to send the request\n> > + * @request: Request to send\n> > + * @request_size: Size of the request in bytes\n> > + * @request_type: PDF type of the request\n> > + * @response: Response is copied here\n> > + * @response_size: Expected size of the response in bytes\n> > + * @response_type: Expected PDF type of the response\n> > + * @timeout_msec: Timeout in milliseconds to wait for the response\n> > + *\n> > + * This function can be used to send XDomain control channel\n> > messages to\n> > + * the other domain. The function waits until the response is\n> > received\n> > + * or when timeout triggers. Whichever comes first.\n> > + *\n> > + * Return: %0 in case of success and negative errno in case of\n> > failure\n> > + */\n> > +int tb_xdomain_request(struct tb_xdomain *xd, const void *request,\n> > +\tsize_t request_size, enum tb_cfg_pkg_type request_type,\n> > +\tvoid *response, size_t response_size,\n> > +\tenum tb_cfg_pkg_type response_type, unsigned int\n> > timeout_msec)\n> > +{\n> > +\treturn __tb_xdomain_request(xd->tb->ctl, request,\n> > request_size,\n> > +\t\t\t\t    request_type, response,\n> > response_size,\n> > +\t\t\t\t    response_type, timeout_msec);\n> > +}\n> > +EXPORT_SYMBOL_GPL(tb_xdomain_request);\n> > +\n> > +static inline void tb_xdp_fill_header(struct tb_xdp_header *hdr,\n> > u64\n> > route,\n> > +\tu8 sequence, enum tb_xdp_type type, size_t size)\n> > +{\n> > +\tu32 length_sn;\n> > +\n> > +\tlength_sn = (size - sizeof(hdr->xd_hdr)) / 4;\n> > +\tlength_sn |= (sequence << TB_XDOMAIN_SN_SHIFT) &\n> > TB_XDOMAIN_SN_MASK;\n> > +\n> > +\thdr->xd_hdr.route_hi = upper_32_bits(route);\n> > +\thdr->xd_hdr.route_lo = lower_32_bits(route);\n> > +\thdr->xd_hdr.length_sn = length_sn;\n> > +\thdr->type = type;\n> > +\tmemcpy(&hdr->uuid, &tb_xdp_uuid, sizeof(tb_xdp_uuid));\n> > +}\n> > +\n> > +static int tb_xdp_handle_error(const struct 
tb_xdp_header *hdr)\n> > +{\n> > +\tconst struct tb_xdp_error_response *error;\n> > +\n> > +\tif (hdr->type != ERROR_RESPONSE)\n> > +\t\treturn 0;\n> > +\n> > +\terror = (const struct tb_xdp_error_response *)hdr;\n> > +\n> > +\tswitch (error->error) {\n> > +\tcase ERROR_UNKNOWN_PACKET:\n> > +\tcase ERROR_UNKNOWN_DOMAIN:\n> > +\t\treturn -EIO;\n> > +\tcase ERROR_NOT_SUPPORTED:\n> > +\t\treturn -ENOTSUPP;\n> > +\tcase ERROR_NOT_READY:\n> > +\t\treturn -EAGAIN;\n> > +\tdefault:\n> > +\t\tbreak;\n> > +\t}\n> > +\n> > +\treturn 0;\n> > +}\n> > +\n> > +static int tb_xdp_error_response(struct tb_ctl *ctl, u64 route, u8\n> > sequence,\n> > +\t\t\t\t enum tb_xdp_error error)\n> > +{\n> > +\tstruct tb_xdp_error_response res;\n> > +\n> > +\tmemset(&res, 0, sizeof(res));\n> > +\ttb_xdp_fill_header(&res.hdr, route, sequence,\n> > ERROR_RESPONSE,\n> > +\t\t\t   sizeof(res));\n> > +\tres.error = error;\n> > +\n> > +\treturn __tb_xdomain_response(ctl, &res, sizeof(res),\n> > +\t\t\t\t     TB_CFG_PKG_XDOMAIN_RESP);\n> > +}\n> > +\n> > +static int tb_xdp_properties_request(struct tb_ctl *ctl, u64\n> > route,\n> > +\tconst uuid_t *src_uuid, const uuid_t *dst_uuid, int retry,\n> > +\tu32 **block, u32 *generation)\n> > +{\n> > +\tstruct tb_xdp_properties_response *res;\n> > +\tstruct tb_xdp_properties req;\n> > +\tu16 data_len, len;\n> > +\tsize_t total_size;\n> > +\tu32 *data = NULL;\n> > +\tint ret;\n> > +\n> > +\ttotal_size = sizeof(*res) +\n> > TB_XDP_PROPERTIES_MAX_DATA_LENGTH * 4;\n> > +\tres = kzalloc(total_size, GFP_KERNEL);\n> > +\tif (!res)\n> > +\t\treturn -ENOMEM;\n> > +\n> > +\tmemset(&req, 0, sizeof(req));\n> > +\ttb_xdp_fill_header(&req.hdr, route, retry % 4,\n> > PROPERTIES_REQUEST,\n> > +\t\t\t   sizeof(req));\n> > +\tmemcpy(&req.src_uuid, src_uuid, sizeof(*src_uuid));\n> > +\tmemcpy(&req.dst_uuid, dst_uuid, sizeof(*dst_uuid));\n> > +\n> > +\tlen = 0;\n> > +\tdata_len = 0;\n> > +\n> > +\tdo {\n> > +\t\tret = __tb_xdomain_request(ctl, &req, sizeof(req),\n> > +\t\t\t\t\t 
  TB_CFG_PKG_XDOMAIN_REQ,\n> > res,\n> > +\t\t\t\t\t   total_size,\n> > TB_CFG_PKG_XDOMAIN_RESP,\n> > +\t\t\t\t\t   XDOMAIN_DEFAULT_TIMEOUT\n> > );\n> > +\t\tif (ret)\n> > +\t\t\tgoto err;\n> > +\n> > +\t\tret = tb_xdp_handle_error(&res->hdr);\n> > +\t\tif (ret)\n> > +\t\t\tgoto err;\n> > +\n> > +\t\t/*\n> > +\t\t * Package length includes the whole payload\n> > without\n> > the\n> > +\t\t * XDomain header. Validate first that the package\n> > is at\n> > +\t\t * least size of the response structure.\n> > +\t\t */\n> > +\t\tlen = res->hdr.xd_hdr.length_sn &\n> > TB_XDOMAIN_LENGTH_MASK;\n> > +\t\tif (len < sizeof(*res) / 4) {\n> > +\t\t\tret = -EINVAL;\n> > +\t\t\tgoto err;\n> > +\t\t}\n> > +\n> > +\t\tlen += sizeof(res->hdr.xd_hdr) / 4;\n> > +\t\tlen -= sizeof(*res) / 4;\n> > +\n> > +\t\tif (res->offset != req.offset) {\n> > +\t\t\tret = -EINVAL;\n> > +\t\t\tgoto err;\n> > +\t\t}\n> > +\n> > +\t\t/*\n> > +\t\t * First time allocate block that has enough space\n> > for\n> > +\t\t * the whole properties block.\n> > +\t\t */\n> > +\t\tif (!data) {\n> > +\t\t\tdata_len = res->data_length;\n> > +\t\t\tif (data_len >\n> > TB_XDP_PROPERTIES_MAX_LENGTH)\n> > {\n> > +\t\t\t\tret = -E2BIG;\n> > +\t\t\t\tgoto err;\n> > +\t\t\t}\n> > +\n> > +\t\t\tdata = kcalloc(data_len, sizeof(u32),\n> > GFP_KERNEL);\n> > +\t\t\tif (!data) {\n> > +\t\t\t\tret = -ENOMEM;\n> > +\t\t\t\tgoto err;\n> > +\t\t\t}\n> > +\t\t}\n> > +\n> > +\t\tmemcpy(data + req.offset, res->data, len * 4);\n> > +\t\treq.offset += len;\n> > +\t} while (!data_len || req.offset < data_len);\n> > +\n> > +\t*block = data;\n> > +\t*generation = res->generation;\n> > +\n> > +\tkfree(res);\n> > +\n> > +\treturn data_len;\n> > +\n> > +err:\n> > +\tkfree(data);\n> > +\tkfree(res);\n> > +\n> > +\treturn ret;\n> > +}\n> > +\n> > +static int tb_xdp_properties_response(struct tb *tb, struct tb_ctl\n> > *ctl,\n> > +\tu64 route, u8 sequence, const uuid_t *src_uuid,\n> > +\tconst struct tb_xdp_properties *req)\n> > +{\n> > +\tstruct 
tb_xdp_properties_response *res;\n> > +\tsize_t total_size;\n> > +\tu16 len;\n> > +\tint ret;\n> > +\n> > +\t/*\n> > +\t * Currently we expect all requests to be directed to us.\n> > The\n> > +\t * protocol supports forwarding, though which we might add\n> > +\t * support later on.\n> > +\t */\n> > +\tif (!uuid_equal(src_uuid, &req->dst_uuid)) {\n> > +\t\ttb_xdp_error_response(ctl, route, sequence,\n> > +\t\t\t\t      ERROR_UNKNOWN_DOMAIN);\n> > +\t\treturn 0;\n> > +\t}\n> > +\n> > +\tmutex_lock(&xdomain_lock);\n> > +\n> > +\tif (req->offset >= xdomain_property_block_len) {\n> > +\t\tmutex_unlock(&xdomain_lock);\n> > +\t\treturn -EINVAL;\n> > +\t}\n> > +\n> > +\tlen = xdomain_property_block_len - req->offset;\n> > +\tlen = min_t(u16, len, TB_XDP_PROPERTIES_MAX_DATA_LENGTH);\n> > +\ttotal_size = sizeof(*res) + len * 4;\n> > +\n> > +\tres = kzalloc(total_size, GFP_KERNEL);\n> > +\tif (!res) {\n> > +\t\tmutex_unlock(&xdomain_lock);\n> > +\t\treturn -ENOMEM;\n> > +\t}\n> > +\n> > +\ttb_xdp_fill_header(&res->hdr, route, sequence,\n> > PROPERTIES_RESPONSE,\n> > +\t\t\t   total_size);\n> > +\tres->generation = xdomain_property_block_gen;\n> > +\tres->data_length = xdomain_property_block_len;\n> > +\tres->offset = req->offset;\n> > +\tuuid_copy(&res->src_uuid, src_uuid);\n> > +\tuuid_copy(&res->dst_uuid, &req->src_uuid);\n> > +\tmemcpy(res->data, &xdomain_property_block[req->offset],\n> > len\n> > * 4);\n> > +\n> > +\tmutex_unlock(&xdomain_lock);\n> > +\n> > +\tret = __tb_xdomain_response(ctl, res, total_size,\n> > +\t\t\t\t    TB_CFG_PKG_XDOMAIN_RESP);\n> > +\n> > +\tkfree(res);\n> > +\treturn ret;\n> > +}\n> > +\n> > +static int tb_xdp_properties_changed_request(struct tb_ctl *ctl,\n> > u64\n> > route,\n> > +\t\t\t\t\t     int retry, const\n> > uuid_t\n> > *uuid)\n> > +{\n> > +\tstruct tb_xdp_properties_changed_response res;\n> > +\tstruct tb_xdp_properties_changed req;\n> > +\tint ret;\n> > +\n> > +\tmemset(&req, 0, sizeof(req));\n> > +\ttb_xdp_fill_header(&req.hdr, 
route, retry % 4,\n> > +\t\t\t   PROPERTIES_CHANGED_REQUEST,\n> > sizeof(req));\n> > +\tuuid_copy(&req.src_uuid, uuid);\n> > +\n> > +\tmemset(&res, 0, sizeof(res));\n> > +\tret = __tb_xdomain_request(ctl, &req, sizeof(req),\n> > +\t\t\t\t   TB_CFG_PKG_XDOMAIN_REQ, &res,\n> > sizeof(res),\n> > +\t\t\t\t   TB_CFG_PKG_XDOMAIN_RESP,\n> > +\t\t\t\t   XDOMAIN_DEFAULT_TIMEOUT);\n> > +\tif (ret)\n> > +\t\treturn ret;\n> > +\n> > +\treturn tb_xdp_handle_error(&res.hdr);\n> > +}\n> > +\n> > +static int\n> > +tb_xdp_properties_changed_response(struct tb_ctl *ctl, u64 route,\n> > u8\n> > sequence)\n> > +{\n> > +\tstruct tb_xdp_properties_changed_response res;\n> > +\n> > +\tmemset(&res, 0, sizeof(res));\n> > +\ttb_xdp_fill_header(&res.hdr, route, sequence,\n> > +\t\t\t   PROPERTIES_CHANGED_RESPONSE,\n> > sizeof(res));\n> > +\treturn __tb_xdomain_response(ctl, &res, sizeof(res),\n> > +\t\t\t\t     TB_CFG_PKG_XDOMAIN_RESP);\n> > +}\n> > +\n> > +/**\n> > + * tb_register_protocol_handler() - Register protocol handler\n> > + * @handler: Handler to register\n> > + *\n> > + * This allows XDomain service drivers to hook into incoming\n> > XDomain\n> > + * messages. 
After this function is called the service driver\n> > needs\n> > to\n> > + * be able to handle calls to callback whenever a package with the\n> > + * registered protocol is received.\n> > + */\n> > +int tb_register_protocol_handler(struct tb_protocol_handler\n> > *handler)\n> > +{\n> > +\tif (!handler->uuid || !handler->callback)\n> > +\t\treturn -EINVAL;\n> > +\tif (uuid_equal(handler->uuid, &tb_xdp_uuid))\n> > +\t\treturn -EINVAL;\n> > +\n> > +\tmutex_lock(&xdomain_lock);\n> > +\tlist_add_tail(&handler->list, &protocol_handlers);\n> > +\tmutex_unlock(&xdomain_lock);\n> > +\n> > +\treturn 0;\n> > +}\n> > +EXPORT_SYMBOL_GPL(tb_register_protocol_handler);\n> > +\n> > +/**\n> > + * tb_unregister_protocol_handler() - Unregister protocol handler\n> > + * @handler: Handler to unregister\n> > + *\n> > + * Removes the previously registered protocol handler.\n> > + */\n> > +void tb_unregister_protocol_handler(struct tb_protocol_handler\n> > *handler)\n> > +{\n> > +\tmutex_lock(&xdomain_lock);\n> > +\tlist_del_init(&handler->list);\n> > +\tmutex_unlock(&xdomain_lock);\n> > +}\n> > +EXPORT_SYMBOL_GPL(tb_unregister_protocol_handler);\n> > +\n> > +static void tb_xdp_handle_request(struct work_struct *work)\n> > +{\n> > +\tstruct xdomain_request_work *xw = container_of(work,\n> > typeof(*xw), work);\n> > +\tconst struct tb_xdp_header *pkg = xw->pkg;\n> > +\tconst struct tb_xdomain_header *xhdr = &pkg->xd_hdr;\n> > +\tstruct tb *tb = xw->tb;\n> > +\tstruct tb_ctl *ctl = tb->ctl;\n> > +\tconst uuid_t *uuid;\n> > +\tint ret = 0;\n> > +\tu8 sequence;\n> > +\tu64 route;\n> > +\n> > +\troute = ((u64)xhdr->route_hi << 32 | xhdr->route_lo) &\n> > ~BIT_ULL(63);\n> > +\tsequence = xhdr->length_sn & TB_XDOMAIN_SN_MASK;\n> > +\tsequence >>= TB_XDOMAIN_SN_SHIFT;\n> > +\n> > +\tmutex_lock(&tb->lock);\n> > +\tif (tb->root_switch)\n> > +\t\tuuid = tb->root_switch->uuid;\n> > +\telse\n> > +\t\tuuid = NULL;\n> > +\tmutex_unlock(&tb->lock);\n> > +\n> > +\tif (!uuid) {\n> > 
+\t\ttb_xdp_error_response(ctl, route, sequence,\n> > ERROR_NOT_READY);\n> > +\t\tgoto out;\n> > +\t}\n> > +\n> > +\tswitch (pkg->type) {\n> > +\tcase PROPERTIES_REQUEST:\n> > +\t\tret = tb_xdp_properties_response(tb, ctl, route,\n> > sequence, uuid,\n> > +\t\t\t(const struct tb_xdp_properties *)pkg);\n> > +\t\tbreak;\n> > +\n> > +\tcase PROPERTIES_CHANGED_REQUEST: {\n> > +\t\tconst struct tb_xdp_properties_changed *xchg =\n> > +\t\t\t(const struct tb_xdp_properties_changed\n> > *)pkg;\n> > +\t\tstruct tb_xdomain *xd;\n> > +\n> > +\t\tret = tb_xdp_properties_changed_response(ctl,\n> > route,\n> > sequence);\n> > +\n> > +\t\t/*\n> > +\t\t * Since the properties have been changed, let's\n> > update\n> > +\t\t * the xdomain related to this connection as well\n> > in\n> > +\t\t * case there is a change in services it offers.\n> > +\t\t */\n> > +\t\txd = tb_xdomain_find_by_uuid_locked(tb, &xchg-\n> > > src_uuid);\n> > \n> > +\t\tif (xd) {\n> > +\t\t\tqueue_delayed_work(tb->wq, &xd-\n> > > get_properties_work,\n> > \n> > +\t\t\t\t\t   msecs_to_jiffies(50));\n> > +\t\t\ttb_xdomain_put(xd);\n> > +\t\t}\n> > +\n> > +\t\tbreak;\n> > +\t}\n> > +\n> > +\tdefault:\n> > +\t\tbreak;\n> > +\t}\n> > +\n> > +\tif (ret) {\n> > +\t\ttb_warn(tb, \"failed to send XDomain response for\n> > %#x\\n\",\n> > +\t\t\tpkg->type);\n> > +\t}\n> > +\n> > +out:\n> > +\tkfree(xw->pkg);\n> > +\tkfree(xw);\n> > +}\n> > +\n> > +static void\n> > +tb_xdp_schedule_request(struct tb *tb, const struct tb_xdp_header\n> > *hdr,\n> > +\t\t\tsize_t size)\n> > +{\n> > +\tstruct xdomain_request_work *xw;\n> > +\n> > +\txw = kmalloc(sizeof(*xw), GFP_KERNEL);\n> > +\tif (!xw)\n> > +\t\treturn;\n> > +\n> > +\tINIT_WORK(&xw->work, tb_xdp_handle_request);\n> > +\txw->pkg = kmemdup(hdr, size, GFP_KERNEL);\n> > +\txw->tb = tb;\n> > +\n> > +\tqueue_work(tb->wq, &xw->work);\n> > +}\n> > +\n> > +/**\n> > + * tb_register_service_driver() - Register XDomain service driver\n> > + * @drv: Driver to register\n> > + *\n> > + * 
Registers new service driver from @drv to the bus.\n> > + */\n> > +int tb_register_service_driver(struct tb_service_driver *drv)\n> > +{\n> > +\tdrv->driver.bus = &tb_bus_type;\n> > +\treturn driver_register(&drv->driver);\n> > +}\n> > +EXPORT_SYMBOL_GPL(tb_register_service_driver);\n> > +\n> > +/**\n> > + * tb_unregister_service_driver() - Unregister XDomain service\n> > driver\n> > + * @xdrv: Driver to unregister\n> > + *\n> > + * Unregisters XDomain service driver from the bus.\n> > + */\n> > +void tb_unregister_service_driver(struct tb_service_driver *drv)\n> > +{\n> > +\tdriver_unregister(&drv->driver);\n> > +}\n> > +EXPORT_SYMBOL_GPL(tb_unregister_service_driver);\n> > +\n> > +static ssize_t key_show(struct device *dev, struct\n> > device_attribute\n> > *attr,\n> > +\t\t\tchar *buf)\n> > +{\n> > +\tstruct tb_service *svc = container_of(dev, struct\n> > tb_service, dev);\n> > +\n> > +\t/*\n> > +\t * It should be null terminated but anything else is\n> > pretty\n> > much\n> > +\t * allowed.\n> > +\t */\n> > +\treturn sprintf(buf, \"%*pEp\\n\", (int)strlen(svc->key), svc-\n> > > key);\n> > \n> > +}\n> > +static DEVICE_ATTR_RO(key);\n> > +\n> > +static int get_modalias(struct tb_service *svc, char *buf, size_t\n> > size)\n> > +{\n> > +\treturn snprintf(buf, size, \"tbsvc:k%sp%08Xv%08Xr%08X\",\n> > svc-\n> > > key,\n> > \n> > +\t\t\tsvc->prtcid, svc->prtcvers, svc-\n> > >prtcrevs);\n> > +}\n> > +\n> > +static ssize_t modalias_show(struct device *dev, struct\n> > device_attribute *attr,\n> > +\t\t\t     char *buf)\n> > +{\n> > +\tstruct tb_service *svc = container_of(dev, struct\n> > tb_service, dev);\n> > +\n> > +\t/* Full buffer size except new line and null termination\n> > */\n> > +\tget_modalias(svc, buf, PAGE_SIZE - 2);\n> > +\treturn sprintf(buf, \"%s\\n\", buf);\n> > +}\n> > +static DEVICE_ATTR_RO(modalias);\n> > +\n> > +static ssize_t prtcid_show(struct device *dev, struct\n> > device_attribute *attr,\n> > +\t\t\t   char *buf)\n> > +{\n> > +\tstruct 
tb_service *svc = container_of(dev, struct\n> > tb_service, dev);\n> > +\n> > +\treturn sprintf(buf, \"%u\\n\", svc->prtcid);\n> > +}\n> > +static DEVICE_ATTR_RO(prtcid);\n> > +\n> > +static ssize_t prtcvers_show(struct device *dev, struct\n> > device_attribute *attr,\n> > +\t\t\t     char *buf)\n> > +{\n> > +\tstruct tb_service *svc = container_of(dev, struct\n> > tb_service, dev);\n> > +\n> > +\treturn sprintf(buf, \"%u\\n\", svc->prtcvers);\n> > +}\n> > +static DEVICE_ATTR_RO(prtcvers);\n> > +\n> > +static ssize_t prtcrevs_show(struct device *dev, struct\n> > device_attribute *attr,\n> > +\t\t\t     char *buf)\n> > +{\n> > +\tstruct tb_service *svc = container_of(dev, struct\n> > tb_service, dev);\n> > +\n> > +\treturn sprintf(buf, \"%u\\n\", svc->prtcrevs);\n> > +}\n> > +static DEVICE_ATTR_RO(prtcrevs);\n> > +\n> > +static ssize_t prtcstns_show(struct device *dev, struct\n> > device_attribute *attr,\n> > +\t\t\t     char *buf)\n> > +{\n> > +\tstruct tb_service *svc = container_of(dev, struct\n> > tb_service, dev);\n> > +\n> > +\treturn sprintf(buf, \"0x%08x\\n\", svc->prtcstns);\n> > +}\n> > +static DEVICE_ATTR_RO(prtcstns);\n> > +\n> > +static struct attribute *tb_service_attrs[] = {\n> > +\t&dev_attr_key.attr,\n> > +\t&dev_attr_modalias.attr,\n> > +\t&dev_attr_prtcid.attr,\n> > +\t&dev_attr_prtcvers.attr,\n> > +\t&dev_attr_prtcrevs.attr,\n> > +\t&dev_attr_prtcstns.attr,\n> > +\tNULL,\n> > +};\n> > +\n> > +static struct attribute_group tb_service_attr_group = {\n> > +\t.attrs = tb_service_attrs,\n> > +};\n> > +\n> > +static const struct attribute_group *tb_service_attr_groups[] = {\n> > +\t&tb_service_attr_group,\n> > +\tNULL,\n> > +};\n> > +\n> > +static int tb_service_uevent(struct device *dev, struct\n> > kobj_uevent_env *env)\n> > +{\n> > +\tstruct tb_service *svc = container_of(dev, struct\n> > tb_service, dev);\n> > +\tchar modalias[64];\n> > +\n> > +\tget_modalias(svc, modalias, sizeof(modalias));\n> > +\treturn add_uevent_var(env, \"MODALIAS=%s\", 
modalias);\n> > +}\n> > +\n> > +static void tb_service_release(struct device *dev)\n> > +{\n> > +\tstruct tb_service *svc = container_of(dev, struct\n> > tb_service, dev);\n> > +\tstruct tb_xdomain *xd = tb_service_parent(svc);\n> > +\n> > +\tida_simple_remove(&xd->service_ids, svc->id);\n> > +\tkfree(svc->key);\n> > +\tkfree(svc);\n> > +}\n> > +\n> > +struct device_type tb_service_type = {\n> > +\t.name = \"thunderbolt_service\",\n> > +\t.groups = tb_service_attr_groups,\n> > +\t.uevent = tb_service_uevent,\n> > +\t.release = tb_service_release,\n> > +};\n> > +EXPORT_SYMBOL_GPL(tb_service_type);\n> > +\n> > +static int remove_missing_service(struct device *dev, void *data)\n> > +{\n> > +\tstruct tb_xdomain *xd = data;\n> > +\tstruct tb_service *svc;\n> > +\n> > +\tsvc = tb_to_service(dev);\n> > +\tif (!svc)\n> > +\t\treturn 0;\n> > +\n> > +\tif (!tb_property_find(xd->properties, svc->key,\n> > +\t\t\t      TB_PROPERTY_TYPE_DIRECTORY))\n> > +\t\tdevice_unregister(dev);\n> > +\n> > +\treturn 0;\n> > +}\n> > +\n> > +static int find_service(struct device *dev, void *data)\n> > +{\n> > +\tconst struct tb_property *p = data;\n> > +\tstruct tb_service *svc;\n> > +\n> > +\tsvc = tb_to_service(dev);\n> > +\tif (!svc)\n> > +\t\treturn 0;\n> > +\n> > +\treturn !strcmp(svc->key, p->key);\n> > +}\n> > +\n> > +static int populate_service(struct tb_service *svc,\n> > +\t\t\t    struct tb_property *property)\n> > +{\n> > +\tstruct tb_property_dir *dir = property->value.dir;\n> > +\tstruct tb_property *p;\n> > +\n> > +\t/* Fill in standard properties */\n> > +\tp = tb_property_find(dir, \"prtcid\",\n> > TB_PROPERTY_TYPE_VALUE);\n> > +\tif (p)\n> > +\t\tsvc->prtcid = p->value.immediate;\n> > +\tp = tb_property_find(dir, \"prtcvers\",\n> > TB_PROPERTY_TYPE_VALUE);\n> > +\tif (p)\n> > +\t\tsvc->prtcvers = p->value.immediate;\n> > +\tp = tb_property_find(dir, \"prtcrevs\",\n> > TB_PROPERTY_TYPE_VALUE);\n> > +\tif (p)\n> > +\t\tsvc->prtcrevs = p->value.immediate;\n> > +\tp = 
tb_property_find(dir, \"prtcstns\",\n> > TB_PROPERTY_TYPE_VALUE);\n> > +\tif (p)\n> > +\t\tsvc->prtcstns = p->value.immediate;\n> > +\n> > +\tsvc->key = kstrdup(property->key, GFP_KERNEL);\n> > +\tif (!svc->key)\n> > +\t\treturn -ENOMEM;\n> > +\n> > +\treturn 0;\n> > +}\n> > +\n> > +static void enumerate_services(struct tb_xdomain *xd)\n> > +{\n> > +\tstruct tb_service *svc;\n> > +\tstruct tb_property *p;\n> > +\tstruct device *dev;\n> > +\n> > +\t/*\n> > +\t * First remove all services that are not available\n> > anymore\n> > in\n> > +\t * the updated property block.\n> > +\t */\n> > +\tdevice_for_each_child_reverse(&xd->dev, xd,\n> > remove_missing_service);\n> > +\n> > +\t/* Then re-enumerate properties creating new services as\n> > we\n> > go */\n> > +\ttb_property_for_each(xd->properties, p) {\n> > +\t\tif (p->type != TB_PROPERTY_TYPE_DIRECTORY)\n> > +\t\t\tcontinue;\n> > +\n> > +\t\t/* If the service exists already we are fine */\n> > +\t\tdev = device_find_child(&xd->dev, p,\n> > find_service);\n> > +\t\tif (dev) {\n> > +\t\t\tput_device(dev);\n> > +\t\t\tcontinue;\n> > +\t\t}\n> > +\n> > +\t\tsvc = kzalloc(sizeof(*svc), GFP_KERNEL);\n> > +\t\tif (!svc)\n> > +\t\t\tbreak;\n> > +\n> > +\t\tif (populate_service(svc, p)) {\n> > +\t\t\tkfree(svc);\n> > +\t\t\tbreak;\n> > +\t\t}\n> > +\n> > +\t\tsvc->id = ida_simple_get(&xd->service_ids, 0, 0,\n> > GFP_KERNEL);\n> > +\t\tsvc->dev.bus = &tb_bus_type;\n> > +\t\tsvc->dev.type = &tb_service_type;\n> > +\t\tsvc->dev.parent = &xd->dev;\n> > +\t\tdev_set_name(&svc->dev, \"%s.%d\", dev_name(&xd-\n> > >dev), \n> > svc->id);\n> > +\n> > +\t\tif (device_register(&svc->dev)) {\n> > +\t\t\tput_device(&svc->dev);\n> > +\t\t\tbreak;\n> > +\t\t}\n> > +\t}\n> > +}\n> > +\n> > +static int populate_properties(struct tb_xdomain *xd,\n> > +\t\t\t       struct tb_property_dir *dir)\n> > +{\n> > +\tconst struct tb_property *p;\n> > +\n> > +\t/* Required properties */\n> > +\tp = tb_property_find(dir, \"deviceid\",\n> > 
TB_PROPERTY_TYPE_VALUE);\n> > +\tif (!p)\n> > +\t\treturn -EINVAL;\n> > +\txd->device = p->value.immediate;\n> > +\n> > +\tp = tb_property_find(dir, \"vendorid\",\n> > TB_PROPERTY_TYPE_VALUE);\n> > +\tif (!p)\n> > +\t\treturn -EINVAL;\n> > +\txd->vendor = p->value.immediate;\n> > +\n> > +\tkfree(xd->device_name);\n> > +\txd->device_name = NULL;\n> > +\tkfree(xd->vendor_name);\n> > +\txd->vendor_name = NULL;\n> > +\n> > +\t/* Optional properties */\n> > +\tp = tb_property_find(dir, \"deviceid\",\n> > TB_PROPERTY_TYPE_TEXT);\n> > +\tif (p)\n> > +\t\txd->device_name = kstrdup(p->value.text,\n> > GFP_KERNEL);\n> > +\tp = tb_property_find(dir, \"vendorid\",\n> > TB_PROPERTY_TYPE_TEXT);\n> > +\tif (p)\n> > +\t\txd->vendor_name = kstrdup(p->value.text,\n> > GFP_KERNEL);\n> > +\n> > +\treturn 0;\n> > +}\n> > +\n> > +/* Called with @xd->lock held */\n> > +static void tb_xdomain_restore_paths(struct tb_xdomain *xd)\n> > +{\n> > +\tif (!xd->resume)\n> > +\t\treturn;\n> > +\n> > +\txd->resume = false;\n> > +\tif (xd->transmit_path) {\n> > +\t\tdev_dbg(&xd->dev, \"re-establishing DMA path\\n\");\n> > +\t\ttb_domain_approve_xdomain_paths(xd->tb, xd);\n> > +\t}\n> > +}\n> > +\n> > +static void tb_xdomain_get_properties(struct work_struct *work)\n> > +{\n> > +\tstruct tb_xdomain *xd = container_of(work, typeof(*xd),\n> > +\t\t\t\t\t     get_properties_work.w\n> > or\n> > k);\n> > +\tstruct tb_property_dir *dir;\n> > +\tstruct tb *tb = xd->tb;\n> > +\tbool update = false;\n> > +\tu32 *block = NULL;\n> > +\tu32 gen = 0;\n> > +\tint ret;\n> > +\n> > +\tret = tb_xdp_properties_request(tb->ctl, xd->route, xd-\n> > > local_uuid,\n> > \n> > +\t\t\t\t\txd->remote_uuid, xd-\n> > > properties_retries,\n> > \n> > +\t\t\t\t\t&block, &gen);\n> > +\tif (ret < 0) {\n> > +\t\tif (xd->properties_retries-- > 0) {\n> > +\t\t\tqueue_delayed_work(xd->tb->wq, &xd-\n> > > get_properties_work,\n> > \n> > +\t\t\t\t\t   msecs_to_jiffies(1000))\n> > ;\n> > +\t\t} else {\n> > +\t\t\t/* Give up now */\n> > 
+\t\t\tdev_err(&xd->dev,\n> > +\t\t\t\t\"failed to read XDomain properties from %pUb\\n\",\n> > +\t\t\t\txd->remote_uuid);\n> > +\t\t}\n> > +\t\treturn;\n> > +\t}\n> > +\n> > +\txd->properties_retries = XDOMAIN_PROPERTIES_RETRIES;\n> > +\n> > +\tmutex_lock(&xd->lock);\n> > +\n> > +\t/* Only accept newer generation properties */\n> > +\tif (xd->properties && gen <= xd->property_block_gen) {\n> > +\t\t/*\n> > +\t\t * On resume it is likely that the properties block is\n> > +\t\t * not changed (unless the other end added or removed\n> > +\t\t * services). However, we need to make sure the existing\n> > +\t\t * DMA paths are restored properly.\n> > +\t\t */\n> > +\t\ttb_xdomain_restore_paths(xd);\n> > +\t\tgoto err_free_block;\n> > +\t}\n> > +\n> > +\tdir = tb_property_parse_dir(block, ret);\n> > +\tif (!dir) {\n> > +\t\tdev_err(&xd->dev, \"failed to parse XDomain properties\\n\");\n> > +\t\tgoto err_free_block;\n> > +\t}\n> > +\n> > +\tret = populate_properties(xd, dir);\n> > +\tif (ret) {\n> > +\t\tdev_err(&xd->dev, \"missing XDomain properties in response\\n\");\n> > +\t\tgoto err_free_dir;\n> > +\t}\n> > +\n> > +\t/* Release the existing one */\n> > +\tif (xd->properties) {\n> > +\t\ttb_property_free_dir(xd->properties);\n> > +\t\tupdate = true;\n> > +\t}\n> > +\n> > +\txd->properties = dir;\n> > +\txd->property_block_gen = gen;\n> > +\n> > +\ttb_xdomain_restore_paths(xd);\n> > +\n> > +\tmutex_unlock(&xd->lock);\n> > +\n> > +\tkfree(block);\n> > +\n> > +\t/*\n> > +\t * Now the device should be ready enough so we can add it to the\n> > +\t * bus and let userspace know about it. 
If the device is\n> > already\n> > +\t * registered, we notify the userspace that it has\n> > changed.\n> > +\t */\n> > +\tif (!update) {\n> > +\t\tif (device_add(&xd->dev)) {\n> > +\t\t\tdev_err(&xd->dev, \"failed to add XDomain\n> > device\\n\");\n> > +\t\t\treturn;\n> > +\t\t}\n> > +\t} else {\n> > +\t\tkobject_uevent(&xd->dev.kobj, KOBJ_CHANGE);\n> > +\t}\n> > +\n> > +\tenumerate_services(xd);\n> > +\treturn;\n> > +\n> > +err_free_dir:\n> > +\ttb_property_free_dir(dir);\n> > +err_free_block:\n> > +\tkfree(block);\n> > +\tmutex_unlock(&xd->lock);\n> > +}\n> > +\n> > +static void tb_xdomain_properties_changed(struct work_struct\n> > *work)\n> > +{\n> > +\tstruct tb_xdomain *xd = container_of(work, typeof(*xd),\n> > +\t\t\t\t\t     properties_changed_wo\n> > rk\n> > .work);\n> > +\tint ret;\n> > +\n> > +\tret = tb_xdp_properties_changed_request(xd->tb->ctl, xd-\n> > > route,\n> > \n> > +\t\t\t\txd->properties_changed_retries,\n> > xd-\n> > > local_uuid);\n> > \n> > +\tif (ret) {\n> > +\t\tif (xd->properties_changed_retries-- > 0)\n> > +\t\t\tqueue_delayed_work(xd->tb->wq,\n> > +\t\t\t\t\t   &xd-\n> > > properties_changed_work,\n> > \n> > +\t\t\t\t\t   msecs_to_jiffies(1000))\n> > ;\n> > +\t\treturn;\n> > +\t}\n> > +\n> > +\txd->properties_changed_retries =\n> > XDOMAIN_PROPERTIES_CHANGED_RETRIES;\n> > +}\n> > +\n> > +static ssize_t device_show(struct device *dev, struct\n> > device_attribute *attr,\n> > +\t\t\t   char *buf)\n> > +{\n> > +\tstruct tb_xdomain *xd = container_of(dev, struct\n> > tb_xdomain,\n> > dev);\n> > +\n> > +\treturn sprintf(buf, \"%#x\\n\", xd->device);\n> > +}\n> > +static DEVICE_ATTR_RO(device);\n> > +\n> > +static ssize_t\n> > +device_name_show(struct device *dev, struct device_attribute\n> > *attr,\n> > char *buf)\n> > +{\n> > +\tstruct tb_xdomain *xd = container_of(dev, struct\n> > tb_xdomain,\n> > dev);\n> > +\tint ret;\n> > +\n> > +\tif (mutex_lock_interruptible(&xd->lock))\n> > +\t\treturn -ERESTARTSYS;\n> > +\tret = sprintf(buf, 
\"%s\\n\", xd->device_name ? xd-\n> > >device_name \n> > : \"\");\n> > +\tmutex_unlock(&xd->lock);\n> > +\n> > +\treturn ret;\n> > +}\n> > +static DEVICE_ATTR_RO(device_name);\n> > +\n> > +static ssize_t vendor_show(struct device *dev, struct\n> > device_attribute *attr,\n> > +\t\t\t   char *buf)\n> > +{\n> > +\tstruct tb_xdomain *xd = container_of(dev, struct\n> > tb_xdomain,\n> > dev);\n> > +\n> > +\treturn sprintf(buf, \"%#x\\n\", xd->vendor);\n> > +}\n> > +static DEVICE_ATTR_RO(vendor);\n> > +\n> > +static ssize_t\n> > +vendor_name_show(struct device *dev, struct device_attribute\n> > *attr,\n> > char *buf)\n> > +{\n> > +\tstruct tb_xdomain *xd = container_of(dev, struct\n> > tb_xdomain,\n> > dev);\n> > +\tint ret;\n> > +\n> > +\tif (mutex_lock_interruptible(&xd->lock))\n> > +\t\treturn -ERESTARTSYS;\n> > +\tret = sprintf(buf, \"%s\\n\", xd->vendor_name ? xd-\n> > >vendor_name \n> > : \"\");\n> > +\tmutex_unlock(&xd->lock);\n> > +\n> > +\treturn ret;\n> > +}\n> > +static DEVICE_ATTR_RO(vendor_name);\n> > +\n> > +static ssize_t unique_id_show(struct device *dev, struct\n> > device_attribute *attr,\n> > +\t\t\t      char *buf)\n> > +{\n> > +\tstruct tb_xdomain *xd = container_of(dev, struct\n> > tb_xdomain,\n> > dev);\n> > +\n> > +\treturn sprintf(buf, \"%pUb\\n\", xd->remote_uuid);\n> > +}\n> > +static DEVICE_ATTR_RO(unique_id);\n> > +\n> > +static struct attribute *xdomain_attrs[] = {\n> > +\t&dev_attr_device.attr,\n> > +\t&dev_attr_device_name.attr,\n> > +\t&dev_attr_unique_id.attr,\n> > +\t&dev_attr_vendor.attr,\n> > +\t&dev_attr_vendor_name.attr,\n> > +\tNULL,\n> > +};\n> > +\n> > +static struct attribute_group xdomain_attr_group = {\n> > +\t.attrs = xdomain_attrs,\n> > +};\n> > +\n> > +static const struct attribute_group *xdomain_attr_groups[] = {\n> > +\t&xdomain_attr_group,\n> > +\tNULL,\n> > +};\n> > +\n> > +static void tb_xdomain_release(struct device *dev)\n> > +{\n> > +\tstruct tb_xdomain *xd = container_of(dev, struct\n> > tb_xdomain,\n> > dev);\n> > 
+\n> > +\tput_device(xd->dev.parent);\n> > +\n> > +\ttb_property_free_dir(xd->properties);\n> > +\tida_destroy(&xd->service_ids);\n> > +\n> > +\tkfree(xd->local_uuid);\n> > +\tkfree(xd->remote_uuid);\n> > +\tkfree(xd->device_name);\n> > +\tkfree(xd->vendor_name);\n> > +\tkfree(xd);\n> > +}\n> > +\n> > +static void start_handshake(struct tb_xdomain *xd)\n> > +{\n> > +\txd->properties_retries = XDOMAIN_PROPERTIES_RETRIES;\n> > +\txd->properties_changed_retries = XDOMAIN_PROPERTIES_CHANGED_RETRIES;\n> > +\n> > +\t/* Start exchanging properties with the other host */\n> > +\tqueue_delayed_work(xd->tb->wq, &xd->properties_changed_work,\n> > +\t\t\t   msecs_to_jiffies(100));\n> > +\tqueue_delayed_work(xd->tb->wq, &xd->get_properties_work,\n> > +\t\t\t   msecs_to_jiffies(1000));\n> > +}\n> > +\n> > +static void stop_handshake(struct tb_xdomain *xd)\n> > +{\n> > +\txd->properties_retries = 0;\n> > +\txd->properties_changed_retries = 0;\n> > +\n> > +\tcancel_delayed_work_sync(&xd->get_properties_work);\n> > +\tcancel_delayed_work_sync(&xd->properties_changed_work);\n> > +}\n> > +\n> > +static int __maybe_unused tb_xdomain_suspend(struct device *dev)\n> > +{\n> > +\tstop_handshake(tb_to_xdomain(dev));\n> > +\treturn 0;\n> > +}\n> > +\n> > +static int __maybe_unused tb_xdomain_resume(struct device *dev)\n> > +{\n> > +\tstruct tb_xdomain *xd = tb_to_xdomain(dev);\n> > +\n> > +\t/*\n> > +\t * Ask tb_xdomain_get_properties() to restore any existing DMA\n> > +\t * paths after properties are re-read.\n> > +\t */\n> > +\txd->resume = true;\n> > +\tstart_handshake(xd);\n> > +\n> > +\treturn 0;\n> > +}\n> > +\n> > +static const struct dev_pm_ops tb_xdomain_pm_ops = {\n> > +\tSET_SYSTEM_SLEEP_PM_OPS(tb_xdomain_suspend, tb_xdomain_resume)\n> > +};\n> > +\n> > +struct device_type tb_xdomain_type = {\n> > +\t.name = \"thunderbolt_xdomain\",\n> > +\t.release = tb_xdomain_release,\n> > +\t.pm = &tb_xdomain_pm_ops,\n> > +};\n> > +EXPORT_SYMBOL_GPL(tb_xdomain_type);\n> > 
+\n> > +/**\n> > + * tb_xdomain_alloc() - Allocate new XDomain object\n> > + * @tb: Domain where the XDomain belongs\n> > + * @parent: Parent device (the switch through the connection to\n> > the\n> > + *\t    other domain is reached).\n> > + * @route: Route string used to reach the other domain\n> > + * @local_uuid: Our local domain UUID\n> > + * @remote_uuid: UUID of the other domain\n> > + *\n> > + * Allocates new XDomain structure and returns pointer to that.\n> > The\n> > + * object must be released by calling tb_xdomain_put().\n> > + */\n> > +struct tb_xdomain *tb_xdomain_alloc(struct tb *tb, struct device\n> > *parent,\n> > +\t\t\t\t    u64 route, const uuid_t\n> > *local_uuid,\n> > +\t\t\t\t    const uuid_t *remote_uuid)\n> > +{\n> > +\tstruct tb_xdomain *xd;\n> > +\n> > +\txd = kzalloc(sizeof(*xd), GFP_KERNEL);\n> > +\tif (!xd)\n> > +\t\treturn NULL;\n> > +\n> > +\txd->tb = tb;\n> > +\txd->route = route;\n> > +\tida_init(&xd->service_ids);\n> > +\tmutex_init(&xd->lock);\n> > +\tINIT_DELAYED_WORK(&xd->get_properties_work,\n> > tb_xdomain_get_properties);\n> > +\tINIT_DELAYED_WORK(&xd->properties_changed_work,\n> > +\t\t\t  tb_xdomain_properties_changed);\n> > +\n> > +\txd->local_uuid = kmemdup(local_uuid, sizeof(uuid_t),\n> > GFP_KERNEL);\n> > +\tif (!xd->local_uuid)\n> > +\t\tgoto err_free;\n> > +\n> > +\txd->remote_uuid = kmemdup(remote_uuid, sizeof(uuid_t),\n> > GFP_KERNEL);\n> > +\tif (!xd->remote_uuid)\n> > +\t\tgoto err_free_local_uuid;\n> > +\n> > +\tdevice_initialize(&xd->dev);\n> > +\txd->dev.parent = get_device(parent);\n> > +\txd->dev.bus = &tb_bus_type;\n> > +\txd->dev.type = &tb_xdomain_type;\n> > +\txd->dev.groups = xdomain_attr_groups;\n> > +\tdev_set_name(&xd->dev, \"%u-%llx\", tb->index, route);\n> > +\n> > +\treturn xd;\n> > +\n> > +err_free_local_uuid:\n> > +\tkfree(xd->local_uuid);\n> > +err_free:\n> > +\tkfree(xd);\n> > +\n> > +\treturn NULL;\n> > +}\n> > +\n> > +/**\n> > + * tb_xdomain_add() - Add XDomain to the bus\n> > + * @xd: 
XDomain to add\n> > + *\n> > + * This function starts the XDomain discovery protocol handshake and\n> > + * eventually adds the XDomain to the bus. After calling this function\n> > + * the caller needs to call tb_xdomain_remove() in order to remove and\n> > + * release the object regardless of whether the handshake succeeded or not.\n> > + */\n> > +void tb_xdomain_add(struct tb_xdomain *xd)\n> > +{\n> > +\t/* Start exchanging properties with the other host */\n> > +\tstart_handshake(xd);\n> > +}\n> > +\n> > +static int unregister_service(struct device *dev, void *data)\n> > +{\n> > +\tdevice_unregister(dev);\n> > +\treturn 0;\n> > +}\n> > +\n> > +/**\n> > + * tb_xdomain_remove() - Remove XDomain from the bus\n> > + * @xd: XDomain to remove\n> > + *\n> > + * This will stop all ongoing configuration work and remove the XDomain\n> > + * along with any services from the bus. When the last reference to @xd\n> > + * is released the object will be released as well.\n> > + */\n> > +void tb_xdomain_remove(struct tb_xdomain *xd)\n> > +{\n> > +\tstop_handshake(xd);\n> > +\n> > +\tdevice_for_each_child_reverse(&xd->dev, xd, unregister_service);\n> > +\n> > +\tif (!device_is_registered(&xd->dev))\n> > +\t\tput_device(&xd->dev);\n> > +\telse\n> > +\t\tdevice_unregister(&xd->dev);\n> > +}\n> > +\n> > +/**\n> > + * tb_xdomain_enable_paths() - Enable DMA paths for XDomain connection\n> > + * @xd: XDomain connection\n> > + * @transmit_path: HopID of the transmit path the other end is using to\n> > + *\t\t   send packets\n> > + * @transmit_ring: DMA ring used to receive packets from the other end\n> > + * @receive_path: HopID of the receive path the other end is using to\n> > + *\t\t  receive packets\n> > + * @receive_ring: DMA ring used to send packets to the other end\n> > + *\n> > + * The function enables DMA paths accordingly so that after successful\n> > + * return the caller can send and receive packets using 
high-speed\n> > DMA\n> > + * path.\n> > + *\n> > + * Return: %0 in case of success and negative errno in case of\n> > error\n> > + */\n> > +int tb_xdomain_enable_paths(struct tb_xdomain *xd, u16\n> > transmit_path,\n> > +\t\t\t    u16 transmit_ring, u16 receive_path,\n> > +\t\t\t    u16 receive_ring)\n> > +{\n> > +\tint ret;\n> > +\n> > +\tmutex_lock(&xd->lock);\n> > +\n> > +\tif (xd->transmit_path) {\n> > +\t\tret = xd->transmit_path == transmit_path ? 0 :\n> > -EBUSY;\n> > +\t\tgoto exit_unlock;\n> > +\t}\n> > +\n> > +\txd->transmit_path = transmit_path;\n> > +\txd->transmit_ring = transmit_ring;\n> > +\txd->receive_path = receive_path;\n> > +\txd->receive_ring = receive_ring;\n> > +\n> > +\tret = tb_domain_approve_xdomain_paths(xd->tb, xd);\n> > +\n> > +exit_unlock:\n> > +\tmutex_unlock(&xd->lock);\n> > +\n> > +\treturn ret;\n> > +}\n> > +EXPORT_SYMBOL_GPL(tb_xdomain_enable_paths);\n> > +\n> > +/**\n> > + * tb_xdomain_disable_paths() - Disable DMA paths for XDomain\n> > connection\n> > + * @xd: XDomain connection\n> > + *\n> > + * This does the opposite of tb_xdomain_enable_paths(). 
After a call to\n> > + * this the caller is not expected to use the rings anymore.\n> > + *\n> > + * Return: %0 in case of success and negative errno in case of error\n> > + */\n> > +int tb_xdomain_disable_paths(struct tb_xdomain *xd)\n> > +{\n> > +\tint ret = 0;\n> > +\n> > +\tmutex_lock(&xd->lock);\n> > +\tif (xd->transmit_path) {\n> > +\t\txd->transmit_path = 0;\n> > +\t\txd->transmit_ring = 0;\n> > +\t\txd->receive_path = 0;\n> > +\t\txd->receive_ring = 0;\n> > +\n> > +\t\tret = tb_domain_disconnect_xdomain_paths(xd->tb, xd);\n> > +\t}\n> > +\tmutex_unlock(&xd->lock);\n> > +\n> > +\treturn ret;\n> > +}\n> > +EXPORT_SYMBOL_GPL(tb_xdomain_disable_paths);\n> > +\n> > +struct tb_xdomain_lookup {\n> > +\tconst uuid_t *uuid;\n> > +\tu8 link;\n> > +\tu8 depth;\n> > +};\n> > +\n> > +static struct tb_xdomain *switch_find_xdomain(struct tb_switch *sw,\n> > +\tconst struct tb_xdomain_lookup *lookup)\n> > +{\n> > +\tint i;\n> > +\n> > +\tfor (i = 1; i <= sw->config.max_port_number; i++) {\n> > +\t\tstruct tb_port *port = &sw->ports[i];\n> > +\t\tstruct tb_xdomain *xd;\n> > +\n> > +\t\tif (tb_is_upstream_port(port))\n> > +\t\t\tcontinue;\n> > +\n> > +\t\tif (port->xdomain) {\n> > +\t\t\txd = port->xdomain;\n> > +\n> > +\t\t\tif (lookup->uuid) {\n> > +\t\t\t\tif (uuid_equal(xd->remote_uuid, lookup->uuid))\n> > +\t\t\t\t\treturn xd;\n> > +\t\t\t} else if (lookup->link == xd->link &&\n> > +\t\t\t\t   lookup->depth == xd->depth) {\n> > +\t\t\t\treturn xd;\n> > +\t\t\t}\n> > +\t\t} else if (port->remote) {\n> > +\t\t\txd = switch_find_xdomain(port->remote->sw, lookup);\n> > +\t\t\tif (xd)\n> > +\t\t\t\treturn xd;\n> > +\t\t}\n> > +\t}\n> > +\n> > +\treturn NULL;\n> > +}\n> > +\n> > +/**\n> > + * tb_xdomain_find_by_uuid() - Find an XDomain by UUID\n> > + * @tb: Domain where the XDomain belongs to\n> > + * @uuid: UUID to look for\n> > + *\n> > + * Finds XDomain by walking through the Thunderbolt topology below @tb.\n> > + * The returned XDomain 
will have its reference count increased so\n> > the\n> > + * caller needs to call tb_xdomain_put() when it is done with the\n> > + * object.\n> > + *\n> > + * This will find all XDomains including the ones that are not yet\n> > added\n> > + * to the bus (handshake is still in progress).\n> > + *\n> > + * The caller needs to hold @tb->lock.\n> > + */\n> > +struct tb_xdomain *tb_xdomain_find_by_uuid(struct tb *tb, const\n> > uuid_t *uuid)\n> > +{\n> > +\tstruct tb_xdomain_lookup lookup;\n> > +\tstruct tb_xdomain *xd;\n> > +\n> > +\tmemset(&lookup, 0, sizeof(lookup));\n> > +\tlookup.uuid = uuid;\n> > +\n> > +\txd = switch_find_xdomain(tb->root_switch, &lookup);\n> > +\tif (xd) {\n> > +\t\tget_device(&xd->dev);\n> > +\t\treturn xd;\n> > +\t}\n> > +\n> > +\treturn NULL;\n> > +}\n> > +EXPORT_SYMBOL_GPL(tb_xdomain_find_by_uuid);\n> > +\n> > +/**\n> > + * tb_xdomain_find_by_link_depth() - Find an XDomain by link and\n> > depth\n> > + * @tb: Domain where the XDomain belongs to\n> > + * @link: Root switch link number\n> > + * @depth: Depth in the link\n> > + *\n> > + * Finds XDomain by walking through the Thunderbolt topology below\n> > @tb.\n> > + * The returned XDomain will have its reference count increased so\n> > the\n> > + * caller needs to call tb_xdomain_put() when it is done with the\n> > + * object.\n> > + *\n> > + * This will find all XDomains including the ones that are not yet\n> > added\n> > + * to the bus (handshake is still in progress).\n> > + *\n> > + * The caller needs to hold @tb->lock.\n> > + */\n> > +struct tb_xdomain *tb_xdomain_find_by_link_depth(struct tb *tb, u8\n> > link,\n> > +\t\t\t\t\t\t u8 depth)\n> > +{\n> > +\tstruct tb_xdomain_lookup lookup;\n> > +\tstruct tb_xdomain *xd;\n> > +\n> > +\tmemset(&lookup, 0, sizeof(lookup));\n> > +\tlookup.link = link;\n> > +\tlookup.depth = depth;\n> > +\n> > +\txd = switch_find_xdomain(tb->root_switch, &lookup);\n> > +\tif (xd) {\n> > +\t\tget_device(&xd->dev);\n> > +\t\treturn xd;\n> > +\t}\n> > +\n> > 
+\treturn NULL;\n> > +}\n> > +\n> > +bool tb_xdomain_handle_request(struct tb *tb, enum tb_cfg_pkg_type\n> > type,\n> > +\t\t\t       const void *buf, size_t size)\n> > +{\n> > +\tconst struct tb_protocol_handler *handler, *tmp;\n> > +\tconst struct tb_xdp_header *hdr = buf;\n> > +\tunsigned int length;\n> > +\tint ret = 0;\n> > +\n> > +\t/* We expect the packet is at least size of the header */\n> > +\tlength = hdr->xd_hdr.length_sn & TB_XDOMAIN_LENGTH_MASK;\n> > +\tif (length != size / 4 - sizeof(hdr->xd_hdr) / 4)\n> > +\t\treturn true;\n> > +\tif (length < sizeof(*hdr) / 4 - sizeof(hdr->xd_hdr) / 4)\n> > +\t\treturn true;\n> > +\n> > +\t/*\n> > +\t * Handle XDomain discovery protocol packets directly\n> > here.\n> > For\n> > +\t * other protocols (based on their UUID) we call\n> > registered\n> > +\t * handlers in turn.\n> > +\t */\n> > +\tif (uuid_equal(&hdr->uuid, &tb_xdp_uuid)) {\n> > +\t\tif (type == TB_CFG_PKG_XDOMAIN_REQ) {\n> > +\t\t\ttb_xdp_schedule_request(tb, hdr, size);\n> > +\t\t\treturn true;\n> > +\t\t}\n> > +\t\treturn false;\n> > +\t}\n> > +\n> > +\tmutex_lock(&xdomain_lock);\n> > +\tlist_for_each_entry_safe(handler, tmp, &protocol_handlers,\n> > list) {\n> > +\t\tif (!uuid_equal(&hdr->uuid, handler->uuid))\n> > +\t\t\tcontinue;\n> > +\n> > +\t\tmutex_unlock(&xdomain_lock);\n> > +\t\tret = handler->callback(buf, size, handler->data);\n> > +\t\tmutex_lock(&xdomain_lock);\n> > +\n> > +\t\tif (ret)\n> > +\t\t\tbreak;\n> > +\t}\n> > +\tmutex_unlock(&xdomain_lock);\n> > +\n> > +\treturn ret > 0;\n> > +}\n> > +\n> > +static int rebuild_property_block(void)\n> > +{\n> > +\tu32 *block, len;\n> > +\tint ret;\n> > +\n> > +\tret = tb_property_format_dir(xdomain_property_dir, NULL,\n> > 0);\n> > +\tif (ret < 0)\n> > +\t\treturn ret;\n> > +\n> > +\tlen = ret;\n> > +\n> > +\tblock = kcalloc(len, sizeof(u32), GFP_KERNEL);\n> > +\tif (!block)\n> > +\t\treturn -ENOMEM;\n> > +\n> > +\tret = tb_property_format_dir(xdomain_property_dir, block,\n> > len);\n> > +\tif 
(ret) {\n> > +\t\tkfree(block);\n> > +\t\treturn ret;\n> > +\t}\n> > +\n> > +\tkfree(xdomain_property_block);\n> > +\txdomain_property_block = block;\n> > +\txdomain_property_block_len = len;\n> > +\txdomain_property_block_gen++;\n> > +\n> > +\treturn 0;\n> > +}\n> > +\n> > +static int update_xdomain(struct device *dev, void *data)\n> > +{\n> > +\tstruct tb_xdomain *xd;\n> > +\n> > +\txd = tb_to_xdomain(dev);\n> > +\tif (xd) {\n> > +\t\tqueue_delayed_work(xd->tb->wq, &xd-\n> > > properties_changed_work,\n> > \n> > +\t\t\t\t   msecs_to_jiffies(50));\n> > +\t}\n> > +\n> > +\treturn 0;\n> > +}\n> > +\n> > +static void update_all_xdomains(void)\n> > +{\n> > +\tbus_for_each_dev(&tb_bus_type, NULL, NULL,\n> > update_xdomain);\n> > +}\n> > +\n> > +static bool remove_directory(const char *key, const struct\n> > tb_property_dir *dir)\n> > +{\n> > +\tstruct tb_property *p;\n> > +\n> > +\tp = tb_property_find(xdomain_property_dir, key,\n> > +\t\t\t     TB_PROPERTY_TYPE_DIRECTORY);\n> > +\tif (p && p->value.dir == dir) {\n> > +\t\ttb_property_remove(p);\n> > +\t\treturn true;\n> > +\t}\n> > +\treturn false;\n> > +}\n> > +\n> > +/**\n> > + * tb_register_property_dir() - Register property directory to the\n> > host\n> > + * @key: Key (name) of the directory to add\n> > + * @dir: Directory to add\n> > + *\n> > + * Service drivers can use this function to add new property\n> > directory\n> > + * to the host available properties. 
The other connected hosts are\n> > + * notified so they can re-read properties of this host if they\n> > are\n> > + * interested.\n> > + *\n> > + * Return: %0 on success and negative errno on failure\n> > + */\n> > +int tb_register_property_dir(const char *key, struct\n> > tb_property_dir\n> > *dir)\n> > +{\n> > +\tint ret;\n> > +\n> > +\tif (!key || strlen(key) > 8)\n> > +\t\treturn -EINVAL;\n> > +\n> > +\tmutex_lock(&xdomain_lock);\n> > +\tif (tb_property_find(xdomain_property_dir, key,\n> > +\t\t\t     TB_PROPERTY_TYPE_DIRECTORY)) {\n> > +\t\tret = -EEXIST;\n> > +\t\tgoto err_unlock;\n> > +\t}\n> > +\n> > +\tret = tb_property_add_dir(xdomain_property_dir, key, dir);\n> > +\tif (ret)\n> > +\t\tgoto err_unlock;\n> > +\n> > +\tret = rebuild_property_block();\n> > +\tif (ret) {\n> > +\t\tremove_directory(key, dir);\n> > +\t\tgoto err_unlock;\n> > +\t}\n> > +\n> > +\tmutex_unlock(&xdomain_lock);\n> > +\tupdate_all_xdomains();\n> > +\treturn 0;\n> > +\n> > +err_unlock:\n> > +\tmutex_unlock(&xdomain_lock);\n> > +\treturn ret;\n> > +}\n> > +EXPORT_SYMBOL_GPL(tb_register_property_dir);\n> > +\n> > +/**\n> > + * tb_unregister_property_dir() - Removes property directory from\n> > host\n> > + * @key: Key (name) of the directory\n> > + * @dir: Directory to remove\n> > + *\n> > + * This will remove the existing directory from this host and\n> > notify\n> > the\n> > + * connected hosts about the change.\n> > + */\n> > +void tb_unregister_property_dir(const char *key, struct\n> > tb_property_dir *dir)\n> > +{\n> > +\tint ret = 0;\n> > +\n> > +\tmutex_lock(&xdomain_lock);\n> > +\tif (remove_directory(key, dir))\n> > +\t\tret = rebuild_property_block();\n> > +\tmutex_unlock(&xdomain_lock);\n> > +\n> > +\tif (!ret)\n> > +\t\tupdate_all_xdomains();\n> > +}\n> > +EXPORT_SYMBOL_GPL(tb_unregister_property_dir);\n> > +\n> > +int tb_xdomain_init(void)\n> > +{\n> > +\tint ret;\n> > +\n> > +\txdomain_property_dir = tb_property_create_dir(NULL);\n> > +\tif (!xdomain_property_dir)\n> > 
+\t\treturn -ENOMEM;\n> > +\n> > +\t/*\n> > +\t * Initialize standard set of properties without any\n> > service\n> > +\t * directories. Those will be added by service drivers\n> > +\t * themselves when they are loaded.\n> > +\t */\n> > +\ttb_property_add_immediate(xdomain_property_dir,\n> > \"vendorid\",\n> > +\t\t\t\t  PCI_VENDOR_ID_INTEL);\n> > +\ttb_property_add_text(xdomain_property_dir, \"vendorid\",\n> > \"Intel Corp.\");\n> > +\ttb_property_add_immediate(xdomain_property_dir,\n> > \"deviceid\",\n> > 0x1);\n> > +\ttb_property_add_text(xdomain_property_dir, \"deviceid\",\n> > +\t\t\t     utsname()->nodename);\n> > +\ttb_property_add_immediate(xdomain_property_dir,\n> > \"devicerv\",\n> > 0x80000100);\n> > +\n> > +\tret = rebuild_property_block();\n> > +\tif (ret) {\n> > +\t\ttb_property_free_dir(xdomain_property_dir);\n> > +\t\txdomain_property_dir = NULL;\n> > +\t}\n> > +\n> > +\treturn ret;\n> > +}\n> > +\n> > +void tb_xdomain_exit(void)\n> > +{\n> > +\tkfree(xdomain_property_block);\n> > +\ttb_property_free_dir(xdomain_property_dir);\n> > +}\n> > diff --git a/include/linux/mod_devicetable.h\n> > b/include/linux/mod_devicetable.h\n> > index 694cebb50f72..7625c3b81f84 100644\n> > --- a/include/linux/mod_devicetable.h\n> > +++ b/include/linux/mod_devicetable.h\n> > @@ -683,5 +683,31 @@ struct fsl_mc_device_id {\n> >  \tconst char obj_type[16];\n> >  };\n> >  \n> > +/**\n> > + * struct tb_service_id - Thunderbolt service identifiers\n> > + * @match_flags: Flags used to match the structure\n> > + * @protocol_key: Protocol key the service supports\n> > + * @protocol_id: Protocol id the service supports\n> > + * @protocol_version: Version of the protocol\n> > + * @protocol_revision: Revision of the protocol software\n> > + * @driver_data: Driver specific data\n> > + *\n> > + * Thunderbolt XDomain services are exposed as devices where each\n> > device\n> > + * carries the protocol information the service supports.\n> > Thunderbolt\n> > + * XDomain service drivers 
match against that information.\n> > + */\n> > +struct tb_service_id {\n> > +\t__u32 match_flags;\n> > +\tchar protocol_key[8 + 1];\n> > +\t__u32 protocol_id;\n> > +\t__u32 protocol_version;\n> > +\t__u32 protocol_revision;\n> > +\tkernel_ulong_t driver_data;\n> > +};\n> > +\n> > +#define TBSVC_MATCH_PROTOCOL_KEY\t0x0001\n> > +#define TBSVC_MATCH_PROTOCOL_ID\t\t0x0002\n> > +#define TBSVC_MATCH_PROTOCOL_VERSION\t0x0004\n> > +#define TBSVC_MATCH_PROTOCOL_REVISION\t0x0008\n> >  \n> >  #endif /* LINUX_MOD_DEVICETABLE_H */\n> > diff --git a/include/linux/thunderbolt.h b/include/linux/thunderbolt.h\n> > index 4011d6537a8c..79abdaf1c296 100644\n> > --- a/include/linux/thunderbolt.h\n> > +++ b/include/linux/thunderbolt.h\n> > @@ -17,6 +17,7 @@\n> >  #include <linux/device.h>\n> >  #include <linux/list.h>\n> >  #include <linux/mutex.h>\n> > +#include <linux/mod_devicetable.h>\n> >  #include <linux/uuid.h>\n> >  \n> >  enum tb_cfg_pkg_type {\n> > @@ -77,6 +78,8 @@ struct tb {\n> >  };\n> >  \n> >  extern struct bus_type tb_bus_type;\n> > +extern struct device_type tb_service_type;\n> > +extern struct device_type tb_xdomain_type;\n> >  \n> >  #define TB_LINKS_PER_PHY_PORT\t2\n> >  \n> > @@ -155,4 +158,243 @@ struct tb_property *tb_property_get_next(struct tb_property_dir *dir,\n> >  \t     property;\t\t\t\t\t\t\\\n> >  \t     property = tb_property_get_next(dir, property))\n> >  \n> > +int tb_register_property_dir(const char *key, struct tb_property_dir *dir);\n> > +void tb_unregister_property_dir(const char *key, struct\n> > tb_property_dir *dir);\n> > +\n> > +/**\n> > + * struct tb_xdomain - Cross-domain (XDomain) connection\n> > + * @dev: XDomain device\n> > + * @tb: Pointer to the domain\n> > + * @remote_uuid: UUID of the remote domain (host)\n> > + * @local_uuid: Cached local UUID\n> > + * @route: Route string used to reach the other domain\n> > + * @vendor: Vendor ID of the remote domain\n> > + * @device: Device ID of the remote domain\n> 
> + * @lock: Lock to serialize access to the following fields of this structure\n> > + * @vendor_name: Name of the vendor (or %NULL if not known)\n> > + * @device_name: Name of the device (or %NULL if not known)\n> > + * @is_unplugged: The XDomain is unplugged\n> > + * @resume: The XDomain is being resumed\n> > + * @transmit_path: HopID which the remote end expects us to transmit\n> > + * @transmit_ring: Local ring (hop) where outgoing packets are pushed\n> > + * @receive_path: HopID which we expect the remote end to transmit\n> > + * @receive_ring: Local ring (hop) where incoming packets arrive\n> > + * @service_ids: Used to generate IDs for the services\n> > + * @properties: Properties exported by the remote domain\n> > + * @property_block_gen: Generation of @properties\n> > + * @get_properties_work: Work used to get remote domain properties\n> > + * @properties_retries: Number of times left to read properties\n> > + * @properties_changed_work: Work used to notify the remote domain that\n> > + *\t\t\t     our properties have changed\n> > + * @properties_changed_retries: Number of times left to send properties\n> > + *\t\t\t\tchanged notification\n> > + * @link: Root switch link the remote domain is connected to (ICM only)\n> > + * @depth: Depth in the chain where the remote domain is connected (ICM only)\n> > + *\n> > + * This structure represents a connection between two domains (hosts).\n> > + * Each XDomain contains zero or more services which are exposed as\n> > + * &struct tb_service objects.\n> > + *\n> > + * Service drivers may access this structure if they need to enumerate\n> > + * non-standard properties but they need to hold @lock when doing so\n> > + * because properties can be changed asynchronously in response to\n> > + * changes in the remote domain.\n> > + */\n> > +struct tb_xdomain {\n> > +\tstruct device dev;\n> > +\tstruct tb *tb;\n> > +\tuuid_t 
*remote_uuid;\n> > +\tconst uuid_t *local_uuid;\n> > +\tu64 route;\n> > +\tu16 vendor;\n> > +\tu16 device;\n> > +\tstruct mutex lock;\n> > +\tconst char *vendor_name;\n> > +\tconst char *device_name;\n> > +\tbool is_unplugged;\n> > +\tbool resume;\n> > +\tu16 transmit_path;\n> > +\tu16 transmit_ring;\n> > +\tu16 receive_path;\n> > +\tu16 receive_ring;\n> > +\tstruct ida service_ids;\n> > +\tstruct tb_property_dir *properties;\n> > +\tu32 property_block_gen;\n> > +\tstruct delayed_work get_properties_work;\n> > +\tint properties_retries;\n> > +\tstruct delayed_work properties_changed_work;\n> > +\tint properties_changed_retries;\n> > +\tu8 link;\n> > +\tu8 depth;\n> > +};\n> > +\n> > +int tb_xdomain_enable_paths(struct tb_xdomain *xd, u16\n> > transmit_path,\n> > +\t\t\t    u16 transmit_ring, u16 receive_path,\n> > +\t\t\t    u16 receive_ring);\n> > +int tb_xdomain_disable_paths(struct tb_xdomain *xd);\n> > +struct tb_xdomain *tb_xdomain_find_by_uuid(struct tb *tb, const\n> > uuid_t *uuid);\n> > +\n> > +static inline struct tb_xdomain *\n> > +tb_xdomain_find_by_uuid_locked(struct tb *tb, const uuid_t *uuid)\n> > +{\n> > +\tstruct tb_xdomain *xd;\n> > +\n> > +\tmutex_lock(&tb->lock);\n> > +\txd = tb_xdomain_find_by_uuid(tb, uuid);\n> > +\tmutex_unlock(&tb->lock);\n> > +\n> > +\treturn xd;\n> > +}\n> > +\n> > +static inline struct tb_xdomain *tb_xdomain_get(struct tb_xdomain\n> > *xd)\n> > +{\n> > +\tif (xd)\n> > +\t\tget_device(&xd->dev);\n> > +\treturn xd;\n> > +}\n> > +\n> > +static inline void tb_xdomain_put(struct tb_xdomain *xd)\n> > +{\n> > +\tif (xd)\n> > +\t\tput_device(&xd->dev);\n> > +}\n> > +\n> > +static inline bool tb_is_xdomain(const struct device *dev)\n> > +{\n> > +\treturn dev->type == &tb_xdomain_type;\n> > +}\n> > +\n> > +static inline struct tb_xdomain *tb_to_xdomain(struct device *dev)\n> > +{\n> > +\tif (tb_is_xdomain(dev))\n> > +\t\treturn container_of(dev, struct tb_xdomain, dev);\n> > +\treturn NULL;\n> > +}\n> > +\n> > +int 
> > +int tb_xdomain_response(struct tb_xdomain *xd, const void *response,
> > +			size_t size, enum tb_cfg_pkg_type type);
> > +int tb_xdomain_request(struct tb_xdomain *xd, const void *request,
> > +		       size_t request_size, enum tb_cfg_pkg_type request_type,
> > +		       void *response, size_t response_size,
> > +		       enum tb_cfg_pkg_type response_type,
> > +		       unsigned int timeout_msec);
> > +
> > +/**
> > + * tb_protocol_handler - Protocol specific handler
> > + * @uuid: XDomain messages with this UUID are dispatched to this handler
> > + * @callback: Callback called with the XDomain message. Returning %1
> > + *	      here tells the XDomain core that the message was handled
> > + *	      by this handler and should not be forwarded to other
> > + *	      handlers.
> > + * @data: Data passed with the callback
> > + * @list: Handlers are linked using this
> > + *
> > + * Thunderbolt services can hook into incoming XDomain requests by
> > + * registering a protocol handler. The only limitation is that the
> > + * XDomain discovery protocol UUID cannot be registered since it is
> > + * handled by the core XDomain code.
> > + *
> > + * The @callback must check that the message is really directed to
> > + * the service the driver implements.
> > + */
> > +struct tb_protocol_handler {
> > +	const uuid_t *uuid;
> > +	int (*callback)(const void *buf, size_t size, void *data);
> > +	void *data;
> > +	struct list_head list;
> > +};
> > +
> > +int tb_register_protocol_handler(struct tb_protocol_handler *handler);
> > +void tb_unregister_protocol_handler(struct tb_protocol_handler *handler);
> > +
> > +/**
> > + * struct tb_service - Thunderbolt service
> > + * @dev: XDomain device
> > + * @id: ID of the service (shown in sysfs)
> > + * @key: Protocol key from the properties directory
> > + * @prtcid: Protocol ID from the properties directory
> > + * @prtcvers: Protocol version from the properties directory
> > + * @prtcrevs: Protocol software revision from the properties directory
> > + * @prtcstns: Protocol settings mask from the properties directory
> > + *
> > + * Each domain exposes a set of services it supports as a collection
> > + * of properties. For each service there will be one corresponding
> > + * &struct tb_service. Service drivers are bound to these.
> > + */
> > +struct tb_service {
> > +	struct device dev;
> > +	int id;
> > +	const char *key;
> > +	u32 prtcid;
> > +	u32 prtcvers;
> > +	u32 prtcrevs;
> > +	u32 prtcstns;
> > +};
> > +
> > +static inline struct tb_service *tb_service_get(struct tb_service *svc)
> > +{
> > +	if (svc)
> > +		get_device(&svc->dev);
> > +	return svc;
> > +}
> > +
> > +static inline void tb_service_put(struct tb_service *svc)
> > +{
> > +	if (svc)
> > +		put_device(&svc->dev);
> > +}
> > +
> > +static inline bool tb_is_service(const struct device *dev)
> > +{
> > +	return dev->type == &tb_service_type;
> > +}
> > +
> > +static inline struct tb_service *tb_to_service(struct device *dev)
> > +{
> > +	if (tb_is_service(dev))
> > +		return container_of(dev, struct tb_service, dev);
> > +	return NULL;
> > +}
> > +
> > +/**
> > + * tb_service_driver - Thunderbolt service driver
> > + * @driver: Driver structure
> > + * @probe: Called when the driver is probed
> > + * @remove: Called when the driver is removed (optional)
> > + * @shutdown: Called at shutdown time to stop the service (optional)
> > + * @id_table: Table of service identifiers the driver supports
> > + */
> > +struct tb_service_driver {
> > +	struct device_driver driver;
> > +	int (*probe)(struct tb_service *svc, const struct tb_service_id *id);
> > +	void (*remove)(struct tb_service *svc);
> > +	void (*shutdown)(struct tb_service *svc);
> > +	const struct tb_service_id *id_table;
> > +};
> > +
> > +#define TB_SERVICE(key, id)				\
> > +	.match_flags = TBSVC_MATCH_PROTOCOL_KEY |	\
> > +		       TBSVC_MATCH_PROTOCOL_ID,		\
> > +	.protocol_key = (key),				\
> > +	.protocol_id = (id)
> > +
> > +int tb_register_service_driver(struct tb_service_driver *drv);
> > +void tb_unregister_service_driver(struct tb_service_driver *drv);
> > +
> > +static inline void *tb_service_get_drvdata(const struct tb_service *svc)
> > +{
> > +	return dev_get_drvdata(&svc->dev);
> > +}
> > +
> > +static inline void tb_service_set_drvdata(struct tb_service *svc, void *data)
> > +{
> > +	dev_set_drvdata(&svc->dev, data);
> > +}
> > +
> > +static inline struct tb_xdomain *tb_service_parent(struct tb_service *svc)
> > +{
> > +	return tb_to_xdomain(svc->dev.parent);
> > +}
> > +
> >  #endif /* THUNDERBOLT_H_ */
> > diff --git a/scripts/mod/devicetable-offsets.c b/scripts/mod/devicetable-offsets.c
> > index e4d90e50f6fe..57263f2f8f2f 100644
> > --- a/scripts/mod/devicetable-offsets.c
> > +++ b/scripts/mod/devicetable-offsets.c
> > @@ -206,5 +206,12 @@ int main(void)
> >  	DEVID_FIELD(fsl_mc_device_id, vendor);
> >  	DEVID_FIELD(fsl_mc_device_id, obj_type);
> >  
> > +	DEVID(tb_service_id);
> > +	DEVID_FIELD(tb_service_id, match_flags);
> > +	DEVID_FIELD(tb_service_id, protocol_key);
> > +	DEVID_FIELD(tb_service_id, protocol_id);
> > +	DEVID_FIELD(tb_service_id, protocol_version);
> > +	DEVID_FIELD(tb_service_id, protocol_revision);
> > +
> >  	return 0;
> >  }
> > diff --git a/scripts/mod/file2alias.c b/scripts/mod/file2alias.c
> > index 29d6699d5a06..6ef6e63f96fd 100644
> > --- a/scripts/mod/file2alias.c
> > +++ b/scripts/mod/file2alias.c
> > @@ -1301,6 +1301,31 @@ static int do_fsl_mc_entry(const char *filename, void *symval,
> >  }
> >  ADD_TO_DEVTABLE("fslmc", fsl_mc_device_id, do_fsl_mc_entry);
> >  
> > +/* Looks like: tbsvc:kSpNvNrN */
> > +static int do_tbsvc_entry(const char *filename, void *symval, char *alias)
> > +{
> > +	DEF_FIELD(symval, tb_service_id, match_flags);
> > +	DEF_FIELD_ADDR(symval, tb_service_id, protocol_key);
> > +	DEF_FIELD(symval, tb_service_id, protocol_id);
> > +	DEF_FIELD(symval, tb_service_id, protocol_version);
> > +	DEF_FIELD(symval, tb_service_id, protocol_revision);
> > +
> > +	strcpy(alias, "tbsvc:");
> > +	if (match_flags & TBSVC_MATCH_PROTOCOL_KEY)
> > +		sprintf(alias + strlen(alias), "k%s", *protocol_key);
> > +	else
> > +		strcat(alias + strlen(alias), "k*");
> > +	ADD(alias, "p", match_flags & TBSVC_MATCH_PROTOCOL_ID, protocol_id);
> > +	ADD(alias, "v", match_flags & TBSVC_MATCH_PROTOCOL_VERSION,
> > +	    protocol_version);
> > +	ADD(alias, "r", match_flags & TBSVC_MATCH_PROTOCOL_REVISION,
> > +	    protocol_revision);
> > +
> > +	add_wildcard(alias);
> > +	return 1;
> > +}
> > +ADD_TO_DEVTABLE("tbsvc", tb_service_id, do_tbsvc_entry);
> > +
> >  /* Does namelen bytes of name exactly match the symbol? */
> >  static bool sym_is(const char *name, unsigned namelen, const char *symbol)
> >  {
Message-ID: <1505751303.24112.0.camel@redhat.com>
Subject: Re: [PATCH 06/16] thunderbolt: Add support for XDomain discovery protocol
From: Dan Williams <dcbw@redhat.com>
To: Mika Westerberg <mika.westerberg@linux.intel.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	"David S. Miller" <davem@davemloft.net>
Cc: Andreas Noever, Michael Jamet, Yehezkel Bernat, Amir Levy,
	Mario.Limonciello@dell.com, Lukas Wunner, Andy Shevchenko,
	linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Date: Mon, 18 Sep 2017 11:15:03 -0500
In-Reply-To: <1505751137.11871.2.camel@redhat.com>
From: Mika Westerberg <mika.westerberg@linux.intel.com>
To: Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	"David S. Miller" <davem@davemloft.net>
Date: Tue, 19 Sep 2017 10:40:56 +0300
Message-ID: <20170919074056.GF4630@lahna.fi.intel.com>
In-Reply-To: <20170918153049.44185-7-mika.westerberg@linux.intel.com>
Subject: Re: [PATCH 06/16] thunderbolt: Add support for XDomain discovery protocol

On Mon, Sep 18, 2017 at 06:30:39PM +0300, Mika Westerberg wrote:
> +What:		/sys/bus/thunderbolt/devices/<xdomain>.<service>/key
> +Date:		Dec 2017
> +KernelVersion:	4.14

I forgot to update these to 4.15. I'll fix them in v2.