From patchwork Sat Nov 20 00:02:28 2021 X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 1557462 From: Ben Widawsky To: linux-cxl@vger.kernel.org, linux-pci@vger.kernel.org Cc: Ben Widawsky, Dan Williams, Alison Schofield, Ira Weiny, Jonathan Cameron, Vishal Verma Subject: [PATCH 01/23] cxl: Rename CXL_MEM to CXL_PCI Date: Fri, 19 Nov 2021 16:02:28 -0800 Message-Id: <20211120000250.1663391-2-ben.widawsky@intel.com> In-Reply-To: <20211120000250.1663391-1-ben.widawsky@intel.com> References:
<20211120000250.1663391-1-ben.widawsky@intel.com> The cxl_mem module was renamed cxl_pci in commit 21e9f76733a8 ("cxl: Rename mem to pci"). In preparation for adding an ancillary driver for cxl_memdev devices (registered on the cxl bus by cxl_pci), go ahead and rename CONFIG_CXL_MEM to CONFIG_CXL_PCI. Free up the CXL_MEM name for that new driver to manage CXL.mem endpoint operations. Suggested-by: Dan Williams Signed-off-by: Ben Widawsky Reviewed-by: Jonathan Cameron Reviewed-by: Dan Williams --- Changes since RFCv2: - Reword commit message (Dan) - Reword Kconfig description (Dan) --- drivers/cxl/Kconfig | 23 ++++++++++++----------- drivers/cxl/Makefile | 2 +- 2 files changed, 13 insertions(+), 12 deletions(-) diff --git a/drivers/cxl/Kconfig b/drivers/cxl/Kconfig index 67c91378f2dd..ef05e96f8f97 100644 --- a/drivers/cxl/Kconfig +++ b/drivers/cxl/Kconfig @@ -13,25 +13,26 @@ menuconfig CXL_BUS if CXL_BUS -config CXL_MEM - tristate "CXL.mem: Memory Devices" +config CXL_PCI + tristate "PCI manageability" default CXL_BUS help - The CXL.mem protocol allows a device to act as a provider of - "System RAM" and/or "Persistent Memory" that is fully coherent - as if the memory was attached to the typical CPU memory - controller. + The CXL specification defines a "CXL memory device" sub-class in the + PCI "memory controller" base class of devices. Devices identified by + this class code provide support for volatile and / or persistent + memory to be mapped into the system address map (Host-managed Device + Memory (HDM)). - Say 'y/m' to enable a driver that will attach to CXL.mem devices for - configuration and management primarily via the mailbox interface. See - Chapter 2.3 Type 3 CXL Device in the CXL 2.0 specification for more - details.
+ Say 'y/m' to enable a driver that will attach to CXL memory expander + devices enumerated by the memory device class code for configuration + and management primarily via the mailbox interface. See Chapter 2.3 + Type 3 CXL Device in the CXL 2.0 specification for more details. If unsure say 'm'. config CXL_MEM_RAW_COMMANDS bool "RAW Command Interface for Memory Devices" - depends on CXL_MEM + depends on CXL_PCI help Enable CXL RAW command interface. diff --git a/drivers/cxl/Makefile b/drivers/cxl/Makefile index d1aaabc940f3..cf07ae6cea17 100644 --- a/drivers/cxl/Makefile +++ b/drivers/cxl/Makefile @@ -1,6 +1,6 @@ # SPDX-License-Identifier: GPL-2.0 obj-$(CONFIG_CXL_BUS) += core/ -obj-$(CONFIG_CXL_MEM) += cxl_pci.o +obj-$(CONFIG_CXL_PCI) += cxl_pci.o obj-$(CONFIG_CXL_ACPI) += cxl_acpi.o obj-$(CONFIG_CXL_PMEM) += cxl_pmem.o From patchwork Sat Nov 20 00:02:29 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 1557464 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4Hwtz55QVVz9t0G for ; Sat, 20 Nov 2021 11:03:13 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233041AbhKTAGK (ORCPT ); Fri, 19 Nov 2021 19:06:10 -0500 Received: from mga12.intel.com ([192.55.52.136]:5724 "EHLO mga12.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231298AbhKTAGH (ORCPT ); Fri, 19 Nov 2021 19:06:07 -0500 X-IronPort-AV: E=McAfee;i="6200,9189,10173"; a="214542393" X-IronPort-AV: E=Sophos;i="5.87,248,1631602800"; d="scan'208";a="214542393" Received: from 
From: Ben Widawsky To: linux-cxl@vger.kernel.org, linux-pci@vger.kernel.org Cc: Ben Widawsky, Alison Schofield, Dan Williams, Ira Weiny, Jonathan Cameron, Vishal Verma Subject: [PATCH 02/23] cxl: Flesh out register names Date: Fri, 19 Nov 2021 16:02:29 -0800 Message-Id: <20211120000250.1663391-3-ben.widawsky@intel.com> In-Reply-To: <20211120000250.1663391-1-ben.widawsky@intel.com> Get a better naming scheme in place for upcoming additions.
Signed-off-by: Ben Widawsky Reviewed-by: Jonathan Cameron Reviewed-by: Dan Williams --- Changes since RFCv2: Use some abbreviations (Jonathan) Prefix everything with CXL (Jonathan) Remove new additions (Dan) Original discussion motivating this occurred here: https://lore.kernel.org/linux-pci/20210913190131.xiiszmno46qie7v5@intel.com/ --- drivers/cxl/pci.c | 14 +++++++------- drivers/cxl/pci.h | 19 ++++++++++--------- 2 files changed, 17 insertions(+), 16 deletions(-) diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c index 8dc91fd3396a..a6ea9811a05b 100644 --- a/drivers/cxl/pci.c +++ b/drivers/cxl/pci.c @@ -403,10 +403,10 @@ static int cxl_map_regs(struct cxl_dev_state *cxlds, struct cxl_register_map *ma static void cxl_decode_regblock(u32 reg_lo, u32 reg_hi, struct cxl_register_map *map) { - map->block_offset = - ((u64)reg_hi << 32) | (reg_lo & CXL_REGLOC_ADDR_MASK); - map->barno = FIELD_GET(CXL_REGLOC_BIR_MASK, reg_lo); - map->reg_type = FIELD_GET(CXL_REGLOC_RBI_MASK, reg_lo); + map->block_offset = ((u64)reg_hi << 32) | + (reg_lo & CXL_DVSEC_REG_LOCATOR_BLOCK_OFF_LOW_MASK); + map->barno = FIELD_GET(CXL_DVSEC_REG_LOCATOR_BIR_MASK, reg_lo); + map->reg_type = FIELD_GET(CXL_DVSEC_REG_LOCATOR_BLOCK_ID_MASK, reg_lo); } /** @@ -427,15 +427,15 @@ static int cxl_find_regblock(struct pci_dev *pdev, enum cxl_regloc_type type, int regloc, i; regloc = pci_find_dvsec_capability(pdev, PCI_DVSEC_VENDOR_ID_CXL, - PCI_DVSEC_ID_CXL_REGLOC_DVSEC_ID); + CXL_DVSEC_REG_LOCATOR); if (!regloc) return -ENXIO; pci_read_config_dword(pdev, regloc + PCI_DVSEC_HEADER1, ®loc_size); regloc_size = FIELD_GET(PCI_DVSEC_HEADER1_LENGTH_MASK, regloc_size); - regloc += PCI_DVSEC_ID_CXL_REGLOC_BLOCK1_OFFSET; - regblocks = (regloc_size - PCI_DVSEC_ID_CXL_REGLOC_BLOCK1_OFFSET) / 8; + regloc += CXL_DVSEC_REG_LOCATOR_BLOCK1_OFFSET; + regblocks = (regloc_size - CXL_DVSEC_REG_LOCATOR_BLOCK1_OFFSET) / 8; for (i = 0; i < regblocks; i++, regloc += 8) { u32 reg_lo, reg_hi; diff --git a/drivers/cxl/pci.h 
b/drivers/cxl/pci.h index 7d3e4bf06b45..29b8eaef3a0a 100644 --- a/drivers/cxl/pci.h +++ b/drivers/cxl/pci.h @@ -7,17 +7,21 @@ /* * See section 8.1 Configuration Space Registers in the CXL 2.0 - * Specification + * Specification. Names are taken straight from the specification with "CXL" and + * "DVSEC" redundancies removed. When obvious, abbreviations may be used. */ #define PCI_DVSEC_HEADER1_LENGTH_MASK GENMASK(31, 20) #define PCI_DVSEC_VENDOR_ID_CXL 0x1E98 -#define PCI_DVSEC_ID_CXL 0x0 -#define PCI_DVSEC_ID_CXL_REGLOC_DVSEC_ID 0x8 -#define PCI_DVSEC_ID_CXL_REGLOC_BLOCK1_OFFSET 0xC +/* CXL 2.0 8.1.3: PCIe DVSEC for CXL Device */ +#define CXL_DVSEC_PCIE_DEVICE 0 -/* BAR Indicator Register (BIR) */ -#define CXL_REGLOC_BIR_MASK GENMASK(2, 0) +/* CXL 2.0 8.1.9: Register Locator DVSEC */ +#define CXL_DVSEC_REG_LOCATOR 8 +#define CXL_DVSEC_REG_LOCATOR_BLOCK1_OFFSET 0xC +#define CXL_DVSEC_REG_LOCATOR_BIR_MASK GENMASK(2, 0) +#define CXL_DVSEC_REG_LOCATOR_BLOCK_ID_MASK GENMASK(15, 8) +#define CXL_DVSEC_REG_LOCATOR_BLOCK_OFF_LOW_MASK GENMASK(31, 16) /* Register Block Identifier (RBI) */ enum cxl_regloc_type { @@ -28,7 +32,4 @@ enum cxl_regloc_type { CXL_REGLOC_RBI_TYPES }; -#define CXL_REGLOC_RBI_MASK GENMASK(15, 8) -#define CXL_REGLOC_ADDR_MASK GENMASK(31, 16) - #endif /* __CXL_PCI_H__ */ From patchwork Sat Nov 20 00:02:30 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 1557466 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4Hwtz63Bc1z9t0G for ; Sat, 20 Nov 2021 11:03:14 +1100 (AEDT) Received: 
From: Ben Widawsky To: linux-cxl@vger.kernel.org, linux-pci@vger.kernel.org Cc: Ben Widawsky, Alison Schofield, Dan Williams, Ira Weiny, Jonathan Cameron, Vishal Verma Subject: [PATCH 03/23] cxl/pci: Extract device status check Date: Fri, 19 Nov 2021 16:02:30 -0800 Message-Id: <20211120000250.1663391-4-ben.widawsky@intel.com> In-Reply-To: <20211120000250.1663391-1-ben.widawsky@intel.com> The Memory Device Status register is inspected in the same way for at least two flows in the CXL Type 3 Memory Device Software Guide (Revision: 1.0): 2.13.9 Device discovery and mailbox ready sequence, and 2.13.10 Media ready sequence. Extract this common functionality for use by both.
Signed-off-by: Ben Widawsky Reviewed-by: Jonathan Cameron --- This patch did not exist in RFCv2 --- drivers/cxl/pci.c | 33 +++++++++++++++++++++++++-------- 1 file changed, 25 insertions(+), 8 deletions(-) diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c index a6ea9811a05b..6c8d09fb3a17 100644 --- a/drivers/cxl/pci.c +++ b/drivers/cxl/pci.c @@ -182,6 +182,27 @@ static int __cxl_pci_mbox_send_cmd(struct cxl_dev_state *cxlds, return 0; } +/* + * Implements roughly the bottom half of Figure 42 of the CXL Type 3 Memory + * Device Software Guide + */ +static int check_device_status(struct cxl_dev_state *cxlds) +{ + const u64 md_status = readq(cxlds->regs.memdev + CXLMDEV_STATUS_OFFSET); + + if (md_status & CXLMDEV_DEV_FATAL) { + dev_err(cxlds->dev, "Fatal: replace device\n"); + return -EIO; + } + + if (md_status & CXLMDEV_FW_HALT) { + dev_err(cxlds->dev, "FWHalt: reset or replace device\n"); + return -EBUSY; + } + + return 0; +} + /** * cxl_pci_mbox_get() - Acquire exclusive access to the mailbox. * @cxlds: The device state to gain access to. @@ -231,17 +252,13 @@ static int cxl_pci_mbox_get(struct cxl_dev_state *cxlds) * Hardware shouldn't allow a ready status but also have failure bits * set. 
Spit out an error, this should be a bug report */ - rc = -EFAULT; - if (md_status & CXLMDEV_DEV_FATAL) { - dev_err(dev, "mbox: reported ready, but fatal\n"); + rc = check_device_status(cxlds); + if (rc) goto out; - } - if (md_status & CXLMDEV_FW_HALT) { - dev_err(dev, "mbox: reported ready, but halted\n"); - goto out; - } + if (CXLMDEV_RESET_NEEDED(md_status)) { dev_err(dev, "mbox: reported ready, but reset needed\n"); + rc = -EFAULT; goto out; } From patchwork Sat Nov 20 00:02:31 2021 X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 1557465
-0800 From: Ben Widawsky To: linux-cxl@vger.kernel.org, linux-pci@vger.kernel.org Cc: Ben Widawsky , Alison Schofield , Dan Williams , Ira Weiny , Jonathan Cameron , Vishal Verma Subject: [PATCH 04/23] cxl/pci: Implement Interface Ready Timeout Date: Fri, 19 Nov 2021 16:02:31 -0800 Message-Id: <20211120000250.1663391-5-ben.widawsky@intel.com> X-Mailer: git-send-email 2.34.0 In-Reply-To: <20211120000250.1663391-1-ben.widawsky@intel.com> References: <20211120000250.1663391-1-ben.widawsky@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org The original driver implementation used the doorbell timeout for the Mailbox Interface Ready bit to piggy back off of, since the latter doesn't have a defined timeout. This functionality, introduced in 8adaf747c9f0 ("cxl/mem: Find device capabilities"), can now be improved since a timeout has been defined with an ECN to the 2.0 spec. While devices implemented prior to the ECN could have an arbitrarily long wait and still be within spec, the max ECN value (256s) is chosen as the default for all devices. All vendors in the consortium agreed to this amount and so it is reasonable to assume no devices made will exceed this amount. Signed-off-by: Ben Widawsky --- This patch did not exist in RFCv2 --- drivers/cxl/pci.c | 29 +++++++++++++++++++++++++++++ 1 file changed, 29 insertions(+) diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c index 6c8d09fb3a17..2cef9fec8599 100644 --- a/drivers/cxl/pci.c +++ b/drivers/cxl/pci.c @@ -2,6 +2,7 @@ /* Copyright(c) 2020 Intel Corporation. All rights reserved. 
*/ #include #include +#include #include #include #include @@ -298,6 +299,34 @@ static int cxl_pci_mbox_send(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *c static int cxl_pci_setup_mailbox(struct cxl_dev_state *cxlds) { const int cap = readl(cxlds->regs.mbox + CXLDEV_MBOX_CAPS_OFFSET); + unsigned long timeout; + u64 md_status; + int rc; + + /* + * CXL 2.0 ECN "Add Mailbox Ready Time" defines a capability field to + * dictate how long to wait for the mailbox to become ready. For + * simplicity, and to handle devices that might have been implemented + * prior to the ECN, wait the max amount of time no matter what the + * device says. + */ + timeout = jiffies + 256 * HZ; + + rc = check_device_status(cxlds); + if (rc) + return rc; + + do { + md_status = readq(cxlds->regs.memdev + CXLMDEV_STATUS_OFFSET); + if (md_status & CXLMDEV_MBOX_IF_READY) + break; + if (msleep_interruptible(100)) + break; + } while (!time_after(jiffies, timeout)); + + /* It's assumed that once the interface is ready, it will remain ready. 
*/ + if (!(md_status & CXLMDEV_MBOX_IF_READY)) + return -EIO; cxlds->mbox_send = cxl_pci_mbox_send; cxlds->payload_size = From patchwork Sat Nov 20 00:02:32 2021 X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 1557463 From: Ben Widawsky To: linux-cxl@vger.kernel.org, linux-pci@vger.kernel.org Cc: Ben Widawsky, Alison Schofield, Dan Williams, Ira Weiny, Jonathan Cameron, Vishal Verma Subject: [PATCH 05/23] cxl/pci: Don't poll doorbell for mailbox access Date: Fri, 19 Nov 2021 16:02:32 -0800 Message-Id:
<20211120000250.1663391-6-ben.widawsky@intel.com> X-Mailer: git-send-email 2.34.0 In-Reply-To: <20211120000250.1663391-1-ben.widawsky@intel.com> References: <20211120000250.1663391-1-ben.widawsky@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org The expectation is that the mailbox interface ready bit is the first step in access through the mailbox interface. Therefore, waiting for the doorbell busy bit to be clear would imply that the mailbox interface is ready. The original driver implementation used the doorbell timeout for the Mailbox Interface Ready bit to piggyback off of, since the latter doesn't have a defined timeout (introduced in 8adaf747c9f0 ("cxl/mem: Find device capabilities"), a timeout has since been defined with an ECN to the 2.0 spec). With the current driver waiting for mailbox interface ready as a part of probe() it's no longer necessary to use the piggyback. With the piggybacking no longer necessary it doesn't make sense to check doorbell status when acquiring the mailbox. It will be checked during the normal mailbox exchange protocol. Signed-off-by: Ben Widawsky Reviewed-by: Jonathan Cameron --- This patch did not exist in RFCv2 --- drivers/cxl/pci.c | 25 ++++++------------------- 1 file changed, 6 insertions(+), 19 deletions(-) diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c index 2cef9fec8599..869b4fc18e27 100644 --- a/drivers/cxl/pci.c +++ b/drivers/cxl/pci.c @@ -221,27 +221,14 @@ static int cxl_pci_mbox_get(struct cxl_dev_state *cxlds) /* * XXX: There is some amount of ambiguity in the 2.0 version of the spec - * around the mailbox interface ready (8.2.8.5.1.1). The purpose of the + * around the mailbox interface ready (8.2.8.5.1.1). The purpose of the * bit is to allow firmware running on the device to notify the driver - * that it's ready to receive commands. It is unclear if the bit needs - * to be read for each transaction mailbox, ie. the firmware can switch - * it on and off as needed. 
Second, there is no defined timeout for - * mailbox ready, like there is for the doorbell interface. - * - * Assumptions: - * 1. The firmware might toggle the Mailbox Interface Ready bit, check - * it for every command. - * - * 2. If the doorbell is clear, the firmware should have first set the - * Mailbox Interface Ready bit. Therefore, waiting for the doorbell - * to be ready is sufficient. + * that it's ready to receive commands. The spec does not clearly define + * under what conditions the bit may get set or cleared. As of the 2.0 + * base specification there was no defined timeout for mailbox ready, + * like there is for the doorbell interface. This was fixed with an ECN, + * but it's possible early devices implemented this before the ECN. */ - rc = cxl_pci_mbox_wait_for_doorbell(cxlds); - if (rc) { - dev_warn(dev, "Mailbox interface not ready\n"); - goto out; - } - md_status = readq(cxlds->regs.memdev + CXLMDEV_STATUS_OFFSET); if (!(md_status & CXLMDEV_MBOX_IF_READY && CXLMDEV_READY(md_status))) { dev_err(dev, "mbox: reported doorbell ready, but not mbox ready\n"); From patchwork Sat Nov 20 00:02:33 2021 X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 1557467
From: Ben Widawsky To: linux-cxl@vger.kernel.org, linux-pci@vger.kernel.org Cc: Ben Widawsky, Alison Schofield, Dan Williams, Ira Weiny, Jonathan Cameron, Vishal Verma Subject: [PATCH 06/23] cxl/pci: Don't check media status for mbox access Date: Fri, 19 Nov 2021 16:02:33 -0800 Message-Id: <20211120000250.1663391-7-ben.widawsky@intel.com> In-Reply-To: <20211120000250.1663391-1-ben.widawsky@intel.com> Media status is necessary for using HDM contained in a CXL device but is not needed for mailbox accesses. Therefore remove this check. It will be necessary to have this check (in a different place) when enabling HDM. Signed-off-by: Ben Widawsky Reviewed-by: Jonathan Cameron --- This patch did not exist in RFCv2 --- drivers/cxl/pci.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c index 869b4fc18e27..711bf4514480 100644 --- a/drivers/cxl/pci.c +++ b/drivers/cxl/pci.c @@ -230,7 +230,7 @@ static int cxl_pci_mbox_get(struct cxl_dev_state *cxlds) * but it's possible early devices implemented this before the ECN.
*/ md_status = readq(cxlds->regs.memdev + CXLMDEV_STATUS_OFFSET); - if (!(md_status & CXLMDEV_MBOX_IF_READY && CXLMDEV_READY(md_status))) { + if (!(md_status & CXLMDEV_MBOX_IF_READY)) { dev_err(dev, "mbox: reported doorbell ready, but not mbox ready\n"); rc = -EBUSY; goto out; From patchwork Sat Nov 20 00:02:34 2021 X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 1557468 From: Ben Widawsky To: linux-cxl@vger.kernel.org, linux-pci@vger.kernel.org Cc: Ben Widawsky, Alison Schofield, Dan Williams, Ira Weiny, Jonathan Cameron, Vishal
Verma Subject: [PATCH 07/23] cxl/pci: Add new DVSEC definitions Date: Fri, 19 Nov 2021 16:02:34 -0800 Message-Id: <20211120000250.1663391-8-ben.widawsky@intel.com> In-Reply-To: <20211120000250.1663391-1-ben.widawsky@intel.com> While the new definitions are not yet necessary, they are introduced now to help solidify the newly minted schema for naming registers. Signed-off-by: Ben Widawsky Reviewed-by: Jonathan Cameron Reviewed-by: Dan Williams --- This was split from https://lore.kernel.org/linux-cxl/20211103170552.55ae5u7uvurkync6@intel.com/T/#u per Dan's request. --- drivers/cxl/pci.h | 15 +++++++++++++++ 1 file changed, 15 insertions(+) diff --git a/drivers/cxl/pci.h b/drivers/cxl/pci.h index 29b8eaef3a0a..8ae2b4adc59d 100644 --- a/drivers/cxl/pci.h +++ b/drivers/cxl/pci.h @@ -16,6 +16,21 @@ /* CXL 2.0 8.1.3: PCIe DVSEC for CXL Device */ #define CXL_DVSEC_PCIE_DEVICE 0 +/* CXL 2.0 8.1.4: Non-CXL Function Map DVSEC */ +#define CXL_DVSEC_FUNCTION_MAP 2 + +/* CXL 2.0 8.1.5: CXL 2.0 Extensions DVSEC for Ports */ +#define CXL_DVSEC_PORT_EXTENSIONS 3 + +/* CXL 2.0 8.1.6: GPF DVSEC for CXL Port */ +#define CXL_DVSEC_PORT_GPF 4 + +/* CXL 2.0 8.1.7: GPF DVSEC for CXL Device */ +#define CXL_DVSEC_DEVICE_GPF 5 + +/* CXL 2.0 8.1.8: PCIe DVSEC for Flex Bus Port */ +#define CXL_DVSEC_PCIE_FLEXBUS_PORT 7 + /* CXL 2.0 8.1.9: Register Locator DVSEC */ #define CXL_DVSEC_REG_LOCATOR 8 #define CXL_DVSEC_REG_LOCATOR_BLOCK1_OFFSET 0xC From patchwork Sat Nov 20 00:02:35 2021 X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 1557470
From: Ben Widawsky To: linux-cxl@vger.kernel.org, linux-pci@vger.kernel.org Cc: Ben Widawsky, Alison Schofield, Dan Williams, Ira Weiny, Jonathan Cameron, Vishal Verma Subject: [PATCH 08/23] cxl/acpi: Map component registers for Root Ports Date: Fri, 19 Nov 2021 16:02:35 -0800 Message-Id: <20211120000250.1663391-9-ben.widawsky@intel.com> In-Reply-To: <20211120000250.1663391-1-ben.widawsky@intel.com> This implements the TODO in cxl_acpi for mapping component registers. cxl_acpi becomes the second consumer of CXL register block enumeration (cxl_pci being the first).
Moving the functionality to cxl_core allows both of these drivers to share it. Equally importantly, it allows cxl_core itself to use the functionality in the future. CXL 2.0 root ports are similar to CXL 2.0 Downstream Ports with the main distinction being they're a part of the CXL 2.0 host bridge. While mapping their component registers is not immediately useful for the CXL drivers, the movement of register block enumeration into core is a vital step towards HDM decoder programming. Signed-off-by: Ben Widawsky --- Changes since RFCv2: - Squash commits together (Dan) - Reword commit message to account for above. --- drivers/cxl/acpi.c | 10 ++++++-- drivers/cxl/core/regs.c | 54 +++++++++++++++++++++++++++++++++++++++++ drivers/cxl/cxl.h | 4 +++ drivers/cxl/pci.c | 52 --------------------------------------- drivers/cxl/pci.h | 4 +++ 5 files changed, 70 insertions(+), 54 deletions(-) diff --git a/drivers/cxl/acpi.c b/drivers/cxl/acpi.c index 3163167ecc3a..7cfa8b568013 100644 --- a/drivers/cxl/acpi.c +++ b/drivers/cxl/acpi.c @@ -7,6 +7,7 @@ #include #include #include "cxl.h" +#include "pci.h" /* Encode defined in CXL 2.0 8.2.5.12.7 HDM Decoder Control Register */ #define CFMWS_INTERLEAVE_WAYS(x) (1 << (x)->interleave_ways) @@ -134,11 +135,13 @@ static int cxl_parse_cfmws(union acpi_subtable_headers *header, void *arg, __mock int match_add_root_ports(struct pci_dev *pdev, void *data) { + resource_size_t creg = CXL_RESOURCE_NONE; struct cxl_walk_context *ctx = data; struct pci_bus *root_bus = ctx->root; struct cxl_port *port = ctx->port; int type = pci_pcie_type(pdev); struct device *dev = ctx->dev; + struct cxl_register_map map; u32 lnkcap, port_num; int rc; @@ -152,9 +155,12 @@ __mock int match_add_root_ports(struct pci_dev *pdev, void *data) &lnkcap) != PCIBIOS_SUCCESSFUL) return 0; - /* TODO walk DVSEC to find component register base */ + rc = cxl_find_regblock(pdev, CXL_REGLOC_RBI_COMPONENT, &map); + if (!rc) + creg = cxl_reg_block(pdev, &map); + port_num = 
FIELD_GET(PCI_EXP_LNKCAP_PN, lnkcap); - rc = cxl_add_dport(port, &pdev->dev, port_num, CXL_RESOURCE_NONE); + rc = cxl_add_dport(port, &pdev->dev, port_num, creg); if (rc) { ctx->error = rc; return rc; diff --git a/drivers/cxl/core/regs.c b/drivers/cxl/core/regs.c index e37e23bf4355..41a0245867ea 100644 --- a/drivers/cxl/core/regs.c +++ b/drivers/cxl/core/regs.c @@ -5,6 +5,7 @@ #include #include #include +#include /** * DOC: cxl registers @@ -247,3 +248,56 @@ int cxl_map_device_regs(struct pci_dev *pdev, return 0; } EXPORT_SYMBOL_NS_GPL(cxl_map_device_regs, CXL); + +static void cxl_decode_regblock(u32 reg_lo, u32 reg_hi, + struct cxl_register_map *map) +{ + map->block_offset = ((u64)reg_hi << 32) | + (reg_lo & CXL_DVSEC_REG_LOCATOR_BLOCK_OFF_LOW_MASK); + map->barno = FIELD_GET(CXL_DVSEC_REG_LOCATOR_BIR_MASK, reg_lo); + map->reg_type = FIELD_GET(CXL_DVSEC_REG_LOCATOR_BLOCK_ID_MASK, reg_lo); +} + +/** + * cxl_find_regblock() - Locate register blocks by type + * @pdev: The CXL PCI device to enumerate. + * @type: Register Block Indicator id + * @map: Enumeration output, clobbered on error + * + * Return: 0 if register block enumerated, negative error code otherwise + * + * A CXL DVSEC may point to one or more register blocks, search + * for them by @type. 
+ */ +int cxl_find_regblock(struct pci_dev *pdev, enum cxl_regloc_type type, + struct cxl_register_map *map) +{ + u32 regloc_size, regblocks; + int regloc, i; + + regloc = pci_find_dvsec_capability(pdev, PCI_DVSEC_VENDOR_ID_CXL, + CXL_DVSEC_REG_LOCATOR); + if (!regloc) + return -ENXIO; + + pci_read_config_dword(pdev, regloc + PCI_DVSEC_HEADER1, &regloc_size); + regloc_size = FIELD_GET(PCI_DVSEC_HEADER1_LENGTH_MASK, regloc_size); + + regloc += CXL_DVSEC_REG_LOCATOR_BLOCK1_OFFSET; + regblocks = (regloc_size - CXL_DVSEC_REG_LOCATOR_BLOCK1_OFFSET) / 8; + + for (i = 0; i < regblocks; i++, regloc += 8) { + u32 reg_lo, reg_hi; + + pci_read_config_dword(pdev, regloc, &reg_lo); + pci_read_config_dword(pdev, regloc + 4, &reg_hi); + + cxl_decode_regblock(reg_lo, reg_hi, map); + + if (map->reg_type == type) + return 0; + } + + return -ENODEV; +} +EXPORT_SYMBOL_NS_GPL(cxl_find_regblock, CXL); diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h index ab4596f0b751..7150a9694f66 100644 --- a/drivers/cxl/cxl.h +++ b/drivers/cxl/cxl.h @@ -145,6 +145,10 @@ int cxl_map_device_regs(struct pci_dev *pdev, struct cxl_device_regs *regs, struct cxl_register_map *map); +enum cxl_regloc_type; +int cxl_find_regblock(struct pci_dev *pdev, enum cxl_regloc_type type, + struct cxl_register_map *map); + #define CXL_RESOURCE_NONE ((resource_size_t) -1) #define CXL_TARGET_STRLEN 20 diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c index 711bf4514480..d2c743a31b0c 100644 --- a/drivers/cxl/pci.c +++ b/drivers/cxl/pci.c @@ -433,58 +433,6 @@ static int cxl_map_regs(struct cxl_dev_state *cxlds, struct cxl_register_map *ma return 0; } -static void cxl_decode_regblock(u32 reg_lo, u32 reg_hi, - struct cxl_register_map *map) -{ - map->block_offset = ((u64)reg_hi << 32) | - (reg_lo & CXL_DVSEC_REG_LOCATOR_BLOCK_OFF_LOW_MASK); - map->barno = FIELD_GET(CXL_DVSEC_REG_LOCATOR_BIR_MASK, reg_lo); - map->reg_type = FIELD_GET(CXL_DVSEC_REG_LOCATOR_BLOCK_ID_MASK, reg_lo); -} - -/** - * cxl_find_regblock() - Locate register 
blocks by type - * @pdev: The CXL PCI device to enumerate. - * @type: Register Block Indicator id - * @map: Enumeration output, clobbered on error - * - * Return: 0 if register block enumerated, negative error code otherwise - * - * A CXL DVSEC may point to one or more register blocks, search for them - * by @type. - */ -static int cxl_find_regblock(struct pci_dev *pdev, enum cxl_regloc_type type, - struct cxl_register_map *map) -{ - u32 regloc_size, regblocks; - int regloc, i; - - regloc = pci_find_dvsec_capability(pdev, PCI_DVSEC_VENDOR_ID_CXL, - CXL_DVSEC_REG_LOCATOR); - if (!regloc) - return -ENXIO; - - pci_read_config_dword(pdev, regloc + PCI_DVSEC_HEADER1, &regloc_size); - regloc_size = FIELD_GET(PCI_DVSEC_HEADER1_LENGTH_MASK, regloc_size); - - regloc += CXL_DVSEC_REG_LOCATOR_BLOCK1_OFFSET; - regblocks = (regloc_size - CXL_DVSEC_REG_LOCATOR_BLOCK1_OFFSET) / 8; - - for (i = 0; i < regblocks; i++, regloc += 8) { - u32 reg_lo, reg_hi; - - pci_read_config_dword(pdev, regloc, &reg_lo); - pci_read_config_dword(pdev, regloc + 4, &reg_hi); - - cxl_decode_regblock(reg_lo, reg_hi, map); - - if (map->reg_type == type) - return 0; - } - - return -ENODEV; -} - static int cxl_setup_regs(struct pci_dev *pdev, enum cxl_regloc_type type, struct cxl_register_map *map) { diff --git a/drivers/cxl/pci.h b/drivers/cxl/pci.h index 8ae2b4adc59d..a4b506bb37d1 100644 --- a/drivers/cxl/pci.h +++ b/drivers/cxl/pci.h @@ -47,4 +47,8 @@ enum cxl_regloc_type { CXL_REGLOC_RBI_TYPES }; +#define cxl_reg_block(pdev, map) \ + ((resource_size_t)(pci_resource_start(pdev, (map)->barno) + \ + (map)->block_offset)) + #endif /* __CXL_PCI_H__ */ From patchwork Sat Nov 20 00:02:36 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 1557469 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) 
smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4Hwtz73f4Zz9sS8 for ; Sat, 20 Nov 2021 11:03:15 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232763AbhKTAGO (ORCPT ); Fri, 19 Nov 2021 19:06:14 -0500 Received: from mga12.intel.com ([192.55.52.136]:5719 "EHLO mga12.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229523AbhKTAGJ (ORCPT ); Fri, 19 Nov 2021 19:06:09 -0500 X-IronPort-AV: E=McAfee;i="6200,9189,10173"; a="214542404" X-IronPort-AV: E=Sophos;i="5.87,248,1631602800"; d="scan'208";a="214542404" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by fmsmga106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 19 Nov 2021 16:02:57 -0800 X-IronPort-AV: E=Sophos;i="5.87,248,1631602800"; d="scan'208";a="496088364" Received: from jfaistl-mobl1.amr.corp.intel.com (HELO bad-guy.kumite) ([10.252.139.58]) by orsmga007-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 19 Nov 2021 16:02:57 -0800 From: Ben Widawsky To: linux-cxl@vger.kernel.org, linux-pci@vger.kernel.org Cc: Ben Widawsky , Dan Williams , Alison Schofield , Ira Weiny , Jonathan Cameron , Vishal Verma Subject: [PATCH 09/23] cxl: Introduce module_cxl_driver Date: Fri, 19 Nov 2021 16:02:36 -0800 Message-Id: <20211120000250.1663391-10-ben.widawsky@intel.com> X-Mailer: git-send-email 2.34.0 In-Reply-To: <20211120000250.1663391-1-ben.widawsky@intel.com> References: <20211120000250.1663391-1-ben.widawsky@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org Many CXL drivers simply want to register and unregister themselves. module_driver already supported this. A simple wrapper around that reduces a decent amount of boilerplate in upcoming patches. 
Suggested-by: Dan Williams Signed-off-by: Ben Widawsky Reviewed-by: Jonathan Cameron Reviewed-by: Dan Williams --- drivers/cxl/cxl.h | 3 +++ 1 file changed, 3 insertions(+) diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h index 7150a9694f66..d39d45f4a770 100644 --- a/drivers/cxl/cxl.h +++ b/drivers/cxl/cxl.h @@ -308,6 +308,9 @@ int __cxl_driver_register(struct cxl_driver *cxl_drv, struct module *owner, #define cxl_driver_register(x) __cxl_driver_register(x, THIS_MODULE, KBUILD_MODNAME) void cxl_driver_unregister(struct cxl_driver *cxl_drv); +#define module_cxl_driver(__cxl_driver) \ + module_driver(__cxl_driver, cxl_driver_register, cxl_driver_unregister) + #define CXL_DEVICE_NVDIMM_BRIDGE 1 #define CXL_DEVICE_NVDIMM 2 From patchwork Sat Nov 20 00:02:37 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 1557473 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4Hwtz869kWz9sS8 for ; Sat, 20 Nov 2021 11:03:16 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236389AbhKTAGQ (ORCPT ); Fri, 19 Nov 2021 19:06:16 -0500 Received: from mga12.intel.com ([192.55.52.136]:5724 "EHLO mga12.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231298AbhKTAGK (ORCPT ); Fri, 19 Nov 2021 19:06:10 -0500 X-IronPort-AV: E=McAfee;i="6200,9189,10173"; a="214542405" X-IronPort-AV: E=Sophos;i="5.87,248,1631602800"; d="scan'208";a="214542405" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by fmsmga106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 19 Nov 2021 
16:02:57 -0800 X-IronPort-AV: E=Sophos;i="5.87,248,1631602800"; d="scan'208";a="496088370" Received: from jfaistl-mobl1.amr.corp.intel.com (HELO bad-guy.kumite) ([10.252.139.58]) by orsmga007-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 19 Nov 2021 16:02:57 -0800 From: Ben Widawsky To: linux-cxl@vger.kernel.org, linux-pci@vger.kernel.org Cc: Ben Widawsky , Alison Schofield , Dan Williams , Ira Weiny , Jonathan Cameron , Vishal Verma Subject: [PATCH 10/23] cxl/core: Convert decoder range to resource Date: Fri, 19 Nov 2021 16:02:37 -0800 Message-Id: <20211120000250.1663391-11-ben.widawsky@intel.com> X-Mailer: git-send-email 2.34.0 In-Reply-To: <20211120000250.1663391-1-ben.widawsky@intel.com> References: <20211120000250.1663391-1-ben.widawsky@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org CXL decoders manage address ranges in a hierarchical fashion whereby a leaf is a unique subregion of its parent decoder (midlevel or root). It therefore makes sense to use the resource API for handling this. 
Signed-off-by: Ben Widawsky Reviewed-by: Jonathan Cameron Reviewed-by: Dan Williams --- Changes since RFCv2 - Switch to DEFINE_RES_MEM from NAMED variant (Dan) - Differentiate CFMWS resources and other decoder resources (Ben) - Make decoder resources be range, not resource (Dan) - Set decoder name in cxl_decoder_add() (Dan) --- drivers/cxl/acpi.c | 16 ++++++---------- drivers/cxl/core/bus.c | 19 +++++++++++++++++-- drivers/cxl/cxl.h | 8 ++++++-- 3 files changed, 29 insertions(+), 14 deletions(-) diff --git a/drivers/cxl/acpi.c b/drivers/cxl/acpi.c index 7cfa8b568013..3415184a2e61 100644 --- a/drivers/cxl/acpi.c +++ b/drivers/cxl/acpi.c @@ -108,10 +108,8 @@ static int cxl_parse_cfmws(union acpi_subtable_headers *header, void *arg, cxld->flags = cfmws_to_decoder_flags(cfmws->restrictions); cxld->target_type = CXL_DECODER_EXPANDER; - cxld->range = (struct range){ - .start = cfmws->base_hpa, - .end = cfmws->base_hpa + cfmws->window_size - 1, - }; + cxld->platform_res = (struct resource)DEFINE_RES_MEM(cfmws->base_hpa, + cfmws->window_size); cxld->interleave_ways = CFMWS_INTERLEAVE_WAYS(cfmws); cxld->interleave_granularity = CFMWS_INTERLEAVE_GRANULARITY(cfmws); @@ -127,8 +125,9 @@ static int cxl_parse_cfmws(union acpi_subtable_headers *header, void *arg, return 0; } dev_dbg(dev, "add: %s node: %d range %#llx-%#llx\n", - dev_name(&cxld->dev), phys_to_target_node(cxld->range.start), - cfmws->base_hpa, cfmws->base_hpa + cfmws->window_size - 1); + dev_name(&cxld->dev), + phys_to_target_node(cxld->platform_res.start), cfmws->base_hpa, + cfmws->base_hpa + cfmws->window_size - 1); return 0; } @@ -267,10 +266,7 @@ static int add_host_bridge_uport(struct device *match, void *arg) cxld->interleave_ways = 1; cxld->interleave_granularity = PAGE_SIZE; cxld->target_type = CXL_DECODER_EXPANDER; - cxld->range = (struct range) { - .start = 0, - .end = -1, - }; + cxld->platform_res = (struct resource)DEFINE_RES_MEM(0, 0); device_lock(&port->dev); dport = list_first_entry(&port->dports, 
typeof(*dport), list); diff --git a/drivers/cxl/core/bus.c b/drivers/cxl/core/bus.c index 17a4fff029f8..8e80e85350b1 100644 --- a/drivers/cxl/core/bus.c +++ b/drivers/cxl/core/bus.c @@ -46,8 +46,14 @@ static ssize_t start_show(struct device *dev, struct device_attribute *attr, char *buf) { struct cxl_decoder *cxld = to_cxl_decoder(dev); + u64 start = 0; - return sysfs_emit(buf, "%#llx\n", cxld->range.start); + if (is_root_decoder(dev)) + start = cxld->platform_res.start; + else + start = cxld->decoder_range.start; + + return sysfs_emit(buf, "%#llx\n", start); } static DEVICE_ATTR_RO(start); @@ -55,8 +61,14 @@ static ssize_t size_show(struct device *dev, struct device_attribute *attr, char *buf) { struct cxl_decoder *cxld = to_cxl_decoder(dev); + u64 size = 0; - return sysfs_emit(buf, "%#llx\n", range_len(&cxld->range)); + if (is_root_decoder(dev)) + size = resource_size(&cxld->platform_res); + else + size = range_len(&cxld->decoder_range); + + return sysfs_emit(buf, "%#llx\n", size); } static DEVICE_ATTR_RO(size); @@ -548,6 +560,9 @@ int cxl_decoder_add(struct cxl_decoder *cxld, int *target_map) if (rc) return rc; + if (is_root_decoder(dev)) + cxld->platform_res.name = dev_name(dev); + return device_add(dev); } EXPORT_SYMBOL_NS_GPL(cxl_decoder_add, CXL); diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h index d39d45f4a770..ad816fb5bdcc 100644 --- a/drivers/cxl/cxl.h +++ b/drivers/cxl/cxl.h @@ -179,7 +179,8 @@ enum cxl_decoder_type { * struct cxl_decoder - CXL address range decode configuration * @dev: this decoder's device * @id: kernel device name id - * @range: address range considered by this decoder + * @platform_res: address space resources considered by root decoder + * @decoder_range: address space resources considered by midlevel decoder * @interleave_ways: number of cxl_dports in this decode * @interleave_granularity: data stride per dport * @target_type: accelerator vs expander (type2 vs type3) selector @@ -190,7 +191,10 @@ enum cxl_decoder_type { struct 
cxl_decoder { struct device dev; int id; - struct range range; + union { + struct resource platform_res; + struct range decoder_range; + }; int interleave_ways; int interleave_granularity; enum cxl_decoder_type target_type; From patchwork Sat Nov 20 00:02:38 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 1557472 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4Hwtz83lq4z9t0G for ; Sat, 20 Nov 2021 11:03:16 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231585AbhKTAGP (ORCPT ); Fri, 19 Nov 2021 19:06:15 -0500 Received: from mga12.intel.com ([192.55.52.136]:5719 "EHLO mga12.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233497AbhKTAGK (ORCPT ); Fri, 19 Nov 2021 19:06:10 -0500 X-IronPort-AV: E=McAfee;i="6200,9189,10173"; a="214542406" X-IronPort-AV: E=Sophos;i="5.87,248,1631602800"; d="scan'208";a="214542406" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by fmsmga106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 19 Nov 2021 16:02:58 -0800 X-IronPort-AV: E=Sophos;i="5.87,248,1631602800"; d="scan'208";a="496088374" Received: from jfaistl-mobl1.amr.corp.intel.com (HELO bad-guy.kumite) ([10.252.139.58]) by orsmga007-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 19 Nov 2021 16:02:57 -0800 From: Ben Widawsky To: linux-cxl@vger.kernel.org, linux-pci@vger.kernel.org Cc: Ben Widawsky , Alison Schofield , Dan Williams , Ira Weiny , Jonathan Cameron , Vishal Verma Subject: [PATCH 11/23] cxl/core: Document and 
tighten up decoder APIs Date: Fri, 19 Nov 2021 16:02:38 -0800 Message-Id: <20211120000250.1663391-12-ben.widawsky@intel.com> X-Mailer: git-send-email 2.34.0 In-Reply-To: <20211120000250.1663391-1-ben.widawsky@intel.com> References: <20211120000250.1663391-1-ben.widawsky@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org Since the code to add decoders for switches and endpoints is on the horizon, it helps to have properly documented APIs. In addition, the decoder APIs will never need to support a negative count for downstream targets as the spec explicitly starts numbering them at 1, i.e. even 0 is an "invalid" value which can be used as a sentinel. Signed-off-by: Ben Widawsky Reviewed-by: Jonathan Cameron Reviewed-by: Dan Williams --- This is respun from a previous incantation here: https://lore.kernel.org/linux-cxl/20210915155946.308339-1-ben.widawsky@intel.com/ --- drivers/cxl/core/bus.c | 33 +++++++++++++++++++++++++++++++-- drivers/cxl/cxl.h | 3 ++- 2 files changed, 33 insertions(+), 3 deletions(-) diff --git a/drivers/cxl/core/bus.c b/drivers/cxl/core/bus.c index 8e80e85350b1..1ee12a60f3f4 100644 --- a/drivers/cxl/core/bus.c +++ b/drivers/cxl/core/bus.c @@ -495,7 +495,20 @@ static int decoder_populate_targets(struct cxl_decoder *cxld, return rc; } -struct cxl_decoder *cxl_decoder_alloc(struct cxl_port *port, int nr_targets) +/** + * cxl_decoder_alloc - Allocate a new CXL decoder + * @port: owning port of this decoder + * @nr_targets: downstream targets accessible by this decoder. All upstream + * ports and root ports must have at least 1 target. + * + * A port should contain one or more decoders. Each of those decoders enable + * some address space for CXL.mem utilization. A decoder is expected to be + * configured by the caller before registering. 
+ * + * Return: A new cxl decoder to be registered by cxl_decoder_add() + */ +struct cxl_decoder *cxl_decoder_alloc(struct cxl_port *port, + unsigned int nr_targets) { struct cxl_decoder *cxld, cxld_const_init = { .nr_targets = nr_targets, @@ -503,7 +516,7 @@ struct cxl_decoder *cxl_decoder_alloc(struct cxl_port *port, int nr_targets) struct device *dev; int rc = 0; - if (nr_targets > CXL_DECODER_MAX_INTERLEAVE || nr_targets < 1) + if (nr_targets > CXL_DECODER_MAX_INTERLEAVE || nr_targets == 0) return ERR_PTR(-EINVAL); cxld = kzalloc(struct_size(cxld, target, nr_targets), GFP_KERNEL); @@ -535,6 +548,22 @@ struct cxl_decoder *cxl_decoder_alloc(struct cxl_port *port, int nr_targets) } EXPORT_SYMBOL_NS_GPL(cxl_decoder_alloc, CXL); +/** + * cxl_decoder_add - Add a decoder with targets + * @cxld: The cxl decoder allocated by cxl_decoder_alloc() + * @target_map: A list of downstream ports that this decoder can direct memory + * traffic to. These numbers should correspond with the port number + * in the PCIe Link Capabilities structure. + * + * Certain types of decoders may not have any targets. The main example of this + * is an endpoint device. A more awkward example is a hostbridge whose root + * ports get hot added (technically possible, though unlikely). + * + * Context: Process context. Takes and releases the cxld's device lock. + * + * Return: Negative error code if the decoder wasn't properly configured; else + * returns 0. 
+ */ int cxl_decoder_add(struct cxl_decoder *cxld, int *target_map) { struct cxl_port *port; diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h index ad816fb5bdcc..b66ed8f241c6 100644 --- a/drivers/cxl/cxl.h +++ b/drivers/cxl/cxl.h @@ -288,7 +288,8 @@ int cxl_add_dport(struct cxl_port *port, struct device *dport, int port_id, struct cxl_decoder *to_cxl_decoder(struct device *dev); bool is_root_decoder(struct device *dev); -struct cxl_decoder *cxl_decoder_alloc(struct cxl_port *port, int nr_targets); +struct cxl_decoder *cxl_decoder_alloc(struct cxl_port *port, + unsigned int nr_targets); int cxl_decoder_add(struct cxl_decoder *cxld, int *target_map); int cxl_decoder_autoremove(struct device *host, struct cxl_decoder *cxld); From patchwork Sat Nov 20 00:02:39 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 1557471 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4Hwtz8198sz9sS8 for ; Sat, 20 Nov 2021 11:03:16 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236288AbhKTAGP (ORCPT ); Fri, 19 Nov 2021 19:06:15 -0500 Received: from mga12.intel.com ([192.55.52.136]:5726 "EHLO mga12.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231585AbhKTAGK (ORCPT ); Fri, 19 Nov 2021 19:06:10 -0500 X-IronPort-AV: E=McAfee;i="6200,9189,10173"; a="214542408" X-IronPort-AV: E=Sophos;i="5.87,248,1631602800"; d="scan'208";a="214542408" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by fmsmga106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 19 Nov 
2021 16:02:58 -0800 X-IronPort-AV: E=Sophos;i="5.87,248,1631602800"; d="scan'208";a="496088377" Received: from jfaistl-mobl1.amr.corp.intel.com (HELO bad-guy.kumite) ([10.252.139.58]) by orsmga007-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 19 Nov 2021 16:02:58 -0800 From: Ben Widawsky To: linux-cxl@vger.kernel.org, linux-pci@vger.kernel.org Cc: Ben Widawsky , Alison Schofield , Dan Williams , Ira Weiny , Jonathan Cameron , Vishal Verma Subject: [PATCH 12/23] cxl: Introduce endpoint decoders Date: Fri, 19 Nov 2021 16:02:39 -0800 Message-Id: <20211120000250.1663391-13-ben.widawsky@intel.com> X-Mailer: git-send-email 2.34.0 In-Reply-To: <20211120000250.1663391-1-ben.widawsky@intel.com> References: <20211120000250.1663391-1-ben.widawsky@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org Endpoints have decoders too. It is useful to share the same infrastructure from cxl_core. Endpoints do not have dports (downstream targets), only the underlying physical medium. As a result, some special casing is needed. There is no functional change introduced yet, as endpoints don't actually enumerate decoders.
Signed-off-by: Ben Widawsky --- drivers/cxl/core/bus.c | 41 +++++++++++++++++++++++++++++++++-------- 1 file changed, 33 insertions(+), 8 deletions(-) diff --git a/drivers/cxl/core/bus.c b/drivers/cxl/core/bus.c index 1ee12a60f3f4..16b15f54fb62 100644 --- a/drivers/cxl/core/bus.c +++ b/drivers/cxl/core/bus.c @@ -187,6 +187,12 @@ static const struct attribute_group *cxl_decoder_switch_attribute_groups[] = { NULL, }; +static const struct attribute_group *cxl_decoder_endpoint_attribute_groups[] = { + &cxl_decoder_base_attribute_group, + &cxl_base_attribute_group, + NULL, +}; + static void cxl_decoder_release(struct device *dev) { struct cxl_decoder *cxld = to_cxl_decoder(dev); @@ -196,6 +202,12 @@ static void cxl_decoder_release(struct device *dev) kfree(cxld); } +static const struct device_type cxl_decoder_endpoint_type = { + .name = "cxl_decoder_endpoint", + .release = cxl_decoder_release, + .groups = cxl_decoder_endpoint_attribute_groups, +}; + static const struct device_type cxl_decoder_switch_type = { .name = "cxl_decoder_switch", .release = cxl_decoder_release, @@ -208,6 +220,11 @@ static const struct device_type cxl_decoder_root_type = { .groups = cxl_decoder_root_attribute_groups, }; +static bool is_endpoint_decoder(struct device *dev) +{ + return dev->type == &cxl_decoder_endpoint_type; +} + bool is_root_decoder(struct device *dev) { return dev->type == &cxl_decoder_root_type; @@ -499,7 +516,9 @@ static int decoder_populate_targets(struct cxl_decoder *cxld, * cxl_decoder_alloc - Allocate a new CXL decoder * @port: owning port of this decoder * @nr_targets: downstream targets accessible by this decoder. All upstream - * ports and root ports must have at least 1 target. + * ports and root ports must have at least 1 target. Endpoint + * devices will have 0 targets. Callers wishing to register an + * endpoint device should specify 0. * * A port should contain one or more decoders. Each of those decoders enable * some address space for CXL.mem utilization. 
A decoder is expected to be @@ -516,7 +535,7 @@ struct cxl_decoder *cxl_decoder_alloc(struct cxl_port *port, struct device *dev; int rc = 0; - if (nr_targets > CXL_DECODER_MAX_INTERLEAVE || nr_targets == 0) + if (nr_targets > CXL_DECODER_MAX_INTERLEAVE) return ERR_PTR(-EINVAL); cxld = kzalloc(struct_size(cxld, target, nr_targets), GFP_KERNEL); @@ -535,8 +554,11 @@ struct cxl_decoder *cxl_decoder_alloc(struct cxl_port *port, dev->parent = &port->dev; dev->bus = &cxl_bus_type; + /* Endpoints don't have a target list */ + if (nr_targets == 0) + dev->type = &cxl_decoder_endpoint_type; /* root ports do not have a cxl_port_type parent */ - if (port->dev.parent->type == &cxl_port_type) + else if (port->dev.parent->type == &cxl_port_type) dev->type = &cxl_decoder_switch_type; else dev->type = &cxl_decoder_root_type; @@ -579,12 +601,15 @@ int cxl_decoder_add(struct cxl_decoder *cxld, int *target_map) if (cxld->interleave_ways < 1) return -EINVAL; - port = to_cxl_port(cxld->dev.parent); - rc = decoder_populate_targets(cxld, port, target_map); - if (rc) - return rc; - dev = &cxld->dev; + + port = to_cxl_port(cxld->dev.parent); + if (!is_endpoint_decoder(dev)) { + rc = decoder_populate_targets(cxld, port, target_map); + if (rc) + return rc; + } + rc = dev_set_name(dev, "decoder%d.%d", port->id, cxld->id); if (rc) return rc; From patchwork Sat Nov 20 00:02:40 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 1557482 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4HwtzH017fz9sS8 for ; Sat, 20 Nov 2021 11:03:22 +1100 
(AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236249AbhKTAGW (ORCPT ); Fri, 19 Nov 2021 19:06:22 -0500 Received: from mga12.intel.com ([192.55.52.136]:5726 "EHLO mga12.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236285AbhKTAGM (ORCPT ); Fri, 19 Nov 2021 19:06:12 -0500 X-IronPort-AV: E=McAfee;i="6200,9189,10173"; a="214542410" X-IronPort-AV: E=Sophos;i="5.87,248,1631602800"; d="scan'208";a="214542410" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by fmsmga106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 19 Nov 2021 16:02:58 -0800 X-IronPort-AV: E=Sophos;i="5.87,248,1631602800"; d="scan'208";a="496088382" Received: from jfaistl-mobl1.amr.corp.intel.com (HELO bad-guy.kumite) ([10.252.139.58]) by orsmga007-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 19 Nov 2021 16:02:58 -0800 From: Ben Widawsky To: linux-cxl@vger.kernel.org, linux-pci@vger.kernel.org Cc: Ben Widawsky , Alison Schofield , Dan Williams , Ira Weiny , Jonathan Cameron , Vishal Verma Subject: [PATCH 13/23] cxl/core: Move target population locking to caller Date: Fri, 19 Nov 2021 16:02:40 -0800 Message-Id: <20211120000250.1663391-14-ben.widawsky@intel.com> X-Mailer: git-send-email 2.34.0 In-Reply-To: <20211120000250.1663391-1-ben.widawsky@intel.com> References: <20211120000250.1663391-1-ben.widawsky@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org In preparation for a port driver that enumerates a descendant port + decoder hierarchy, arrange for an unlocked version of cxl_decoder_add(). Otherwise a port-driver that adds a child decoder will deadlock on the device_lock() in ->probe(). 
Signed-off-by: Ben Widawsky Reviewed-by: Jonathan Cameron Reviewed-by: Dan Williams --- Changes since RFCv2: - Reword commit message (Dan) - Move decoder API changes into this patch (Dan) --- drivers/cxl/core/bus.c | 59 +++++++++++++++++++++++++++++++----------- drivers/cxl/cxl.h | 1 + 2 files changed, 45 insertions(+), 15 deletions(-) diff --git a/drivers/cxl/core/bus.c b/drivers/cxl/core/bus.c index 16b15f54fb62..cd6fe7823c69 100644 --- a/drivers/cxl/core/bus.c +++ b/drivers/cxl/core/bus.c @@ -487,28 +487,22 @@ static int decoder_populate_targets(struct cxl_decoder *cxld, { int rc = 0, i; + device_lock_assert(&port->dev); + if (!target_map) return 0; - device_lock(&port->dev); - if (list_empty(&port->dports)) { - rc = -EINVAL; - goto out_unlock; - } + if (list_empty(&port->dports)) + return -EINVAL; for (i = 0; i < cxld->nr_targets; i++) { struct cxl_dport *dport = find_dport(port, target_map[i]); - if (!dport) { - rc = -ENXIO; - goto out_unlock; - } + if (!dport) + return -ENXIO; cxld->target[i] = dport; } -out_unlock: - device_unlock(&port->dev); - return rc; } @@ -571,7 +565,7 @@ struct cxl_decoder *cxl_decoder_alloc(struct cxl_port *port, EXPORT_SYMBOL_NS_GPL(cxl_decoder_alloc, CXL); /** - * cxl_decoder_add - Add a decoder with targets + * cxl_decoder_add_locked - Add a decoder with targets * @cxld: The cxl decoder allocated by cxl_decoder_alloc() * @target_map: A list of downstream ports that this decoder can direct memory * traffic to. These numbers should correspond with the port number @@ -581,12 +575,14 @@ EXPORT_SYMBOL_NS_GPL(cxl_decoder_alloc, CXL); * is an endpoint device. A more awkward example is a hostbridge whose root * ports get hot added (technically possible, though unlikely). * - * Context: Process context. Takes and releases the cxld's device lock. + * This is the locked variant of cxl_decoder_add(). + * + * Context: Process context. Expects the cxld's device lock to be held. 
* * Return: Negative error code if the decoder wasn't properly configured; else * returns 0. */ -int cxl_decoder_add(struct cxl_decoder *cxld, int *target_map) +int cxl_decoder_add_locked(struct cxl_decoder *cxld, int *target_map) { struct cxl_port *port; struct device *dev; @@ -619,6 +615,39 @@ int cxl_decoder_add(struct cxl_decoder *cxld, int *target_map) return device_add(dev); } +EXPORT_SYMBOL_NS_GPL(cxl_decoder_add_locked, CXL); + +/** + * cxl_decoder_add - Add a decoder with targets + * @cxld: The cxl decoder allocated by cxl_decoder_alloc() + * @target_map: A list of downstream ports that this decoder can direct memory + * traffic to. These numbers should correspond with the port number + * in the PCIe Link Capabilities structure. + * + * This is the unlocked variant of cxl_decoder_add_locked(). + * See cxl_decoder_add_locked(). + * + * Context: Process context. Takes and releases the cxld's device lock. + */ +int cxl_decoder_add(struct cxl_decoder *cxld, int *target_map) +{ + struct cxl_port *port; + int rc; + + if (WARN_ON_ONCE(!cxld)) + return -EINVAL; + + if (WARN_ON_ONCE(IS_ERR(cxld))) + return PTR_ERR(cxld); + + port = to_cxl_port(cxld->dev.parent); + + device_lock(&port->dev); + rc = cxl_decoder_add_locked(cxld, target_map); + device_unlock(&port->dev); + + return rc; +} EXPORT_SYMBOL_NS_GPL(cxl_decoder_add, CXL); static void cxld_unregister(void *dev) diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h index b66ed8f241c6..2c5627fa8a34 100644 --- a/drivers/cxl/cxl.h +++ b/drivers/cxl/cxl.h @@ -290,6 +290,7 @@ struct cxl_decoder *to_cxl_decoder(struct device *dev); bool is_root_decoder(struct device *dev); struct cxl_decoder *cxl_decoder_alloc(struct cxl_port *port, unsigned int nr_targets); +int cxl_decoder_add_locked(struct cxl_decoder *cxld, int *target_map); int cxl_decoder_add(struct cxl_decoder *cxld, int *target_map); int cxl_decoder_autoremove(struct device *host, struct cxl_decoder *cxld); From patchwork Sat Nov 20 00:02:41 2021 Content-Type: 
text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 1557475 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4Hwtz942jvz9sS8 for ; Sat, 20 Nov 2021 11:03:17 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236376AbhKTAGS (ORCPT ); Fri, 19 Nov 2021 19:06:18 -0500 Received: from mga12.intel.com ([192.55.52.136]:5726 "EHLO mga12.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234759AbhKTAGK (ORCPT ); Fri, 19 Nov 2021 19:06:10 -0500 X-IronPort-AV: E=McAfee;i="6200,9189,10173"; a="214542411" X-IronPort-AV: E=Sophos;i="5.87,248,1631602800"; d="scan'208";a="214542411" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by fmsmga106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 19 Nov 2021 16:02:59 -0800 X-IronPort-AV: E=Sophos;i="5.87,248,1631602800"; d="scan'208";a="496088386" Received: from jfaistl-mobl1.amr.corp.intel.com (HELO bad-guy.kumite) ([10.252.139.58]) by orsmga007-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 19 Nov 2021 16:02:58 -0800 From: Ben Widawsky To: linux-cxl@vger.kernel.org, linux-pci@vger.kernel.org Cc: Ben Widawsky , Alison Schofield , Dan Williams , Ira Weiny , Jonathan Cameron , Vishal Verma Subject: [PATCH 14/23] cxl: Introduce topology host registration Date: Fri, 19 Nov 2021 16:02:41 -0800 Message-Id: <20211120000250.1663391-15-ben.widawsky@intel.com> X-Mailer: git-send-email 2.34.0 In-Reply-To: <20211120000250.1663391-1-ben.widawsky@intel.com> References: <20211120000250.1663391-1-ben.widawsky@intel.com> 
MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org The description of the CXL topology will be conveyed by a platform specific entity that is expected to be a singleton. For ACPI based systems, this is ACPI0017. When the topology host goes away, which as of now can only be triggered by module unload, it is desirable to have the entire topology cleaned up. Regular devm unwinding handles most situations already, but what's missing is the ability to tear down the root port. Since the root port is platform specific, the core needs a set of APIs to allow platform specific drivers to register their root ports. With that, all the automatic teardown can occur. cxl_test makes for an interesting case. cxl_test creates an alternate universe where there are possibly two root topology hosts (a real ACPI0016 and a fake ACPI0016). For this to work in the future, cxl_acpi (or some future platform host driver) will need to be unloaded first. Signed-off-by: Ben Widawsky Reviewed-by: Jonathan Cameron Reported-by: kernel test robot Reported-by: Dan Carpenter --- The topology lock can be used for more things. I'd like to save that as an add-on patch since it's extra risk for no reward, at this point. 
--- drivers/cxl/acpi.c | 18 ++++++++++--- drivers/cxl/core/bus.c | 57 +++++++++++++++++++++++++++++++++++++++--- drivers/cxl/cxl.h | 5 +++- 3 files changed, 73 insertions(+), 7 deletions(-) diff --git a/drivers/cxl/acpi.c b/drivers/cxl/acpi.c index 3415184a2e61..82cc42ab38c6 100644 --- a/drivers/cxl/acpi.c +++ b/drivers/cxl/acpi.c @@ -224,8 +224,7 @@ static int add_host_bridge_uport(struct device *match, void *arg) return 0; } - port = devm_cxl_add_port(host, match, dport->component_reg_phys, - root_port); + port = devm_cxl_add_port(match, dport->component_reg_phys, root_port); if (IS_ERR(port)) return PTR_ERR(port); dev_dbg(host, "%s: add: %s\n", dev_name(match), dev_name(&port->dev)); @@ -376,6 +375,11 @@ static int add_root_nvdimm_bridge(struct device *match, void *data) return 1; } +static void clear_topology_host(void *data) +{ + cxl_unregister_topology_host(data); +} + static int cxl_acpi_probe(struct platform_device *pdev) { int rc; @@ -384,7 +388,15 @@ static int cxl_acpi_probe(struct platform_device *pdev) struct acpi_device *adev = ACPI_COMPANION(host); struct cxl_cfmws_context ctx; - root_port = devm_cxl_add_port(host, host, CXL_RESOURCE_NONE, NULL); + rc = cxl_register_topology_host(host); + if (rc) + return rc; + + rc = devm_add_action_or_reset(host, clear_topology_host, host); + if (rc) + return rc; + + root_port = devm_cxl_add_port(host, CXL_RESOURCE_NONE, root_port); if (IS_ERR(root_port)) return PTR_ERR(root_port); dev_dbg(host, "add: %s\n", dev_name(&root_port->dev)); diff --git a/drivers/cxl/core/bus.c b/drivers/cxl/core/bus.c index cd6fe7823c69..2ad38167796d 100644 --- a/drivers/cxl/core/bus.c +++ b/drivers/cxl/core/bus.c @@ -25,6 +25,53 @@ */ static DEFINE_IDA(cxl_port_ida); +static DECLARE_RWSEM(topology_host_sem); + +static struct device *cxl_topology_host; + +int cxl_register_topology_host(struct device *host) +{ + down_write(&topology_host_sem); + if (cxl_topology_host) { + up_write(&topology_host_sem); + pr_warn("%s host currently in use. 
Please try unloading %s", + dev_name(cxl_topology_host), host->driver->name); + return -EBUSY; + } + + cxl_topology_host = host; + up_write(&topology_host_sem); + + return 0; +} +EXPORT_SYMBOL_NS_GPL(cxl_register_topology_host, CXL); + +void cxl_unregister_topology_host(struct device *host) +{ + down_write(&topology_host_sem); + if (cxl_topology_host == host) + cxl_topology_host = NULL; + else + pr_warn("topology host in use by %s\n", + cxl_topology_host->driver->name); + up_write(&topology_host_sem); +} +EXPORT_SYMBOL_NS_GPL(cxl_unregister_topology_host, CXL); + +static struct device *get_cxl_topology_host(void) +{ + down_read(&topology_host_sem); + if (cxl_topology_host) + return cxl_topology_host; + up_read(&topology_host_sem); + return NULL; +} + +static void put_cxl_topology_host(struct device *dev) +{ + WARN_ON(dev != cxl_topology_host); + up_read(&topology_host_sem); +} static ssize_t devtype_show(struct device *dev, struct device_attribute *attr, char *buf) @@ -362,17 +409,16 @@ static struct cxl_port *cxl_port_alloc(struct device *uport, /** * devm_cxl_add_port - register a cxl_port in CXL memory decode hierarchy - * @host: host device for devm operations * @uport: "physical" device implementing this upstream port * @component_reg_phys: (optional) for configurable cxl_port instances * @parent_port: next hop up in the CXL memory decode hierarchy */ -struct cxl_port *devm_cxl_add_port(struct device *host, struct device *uport, +struct cxl_port *devm_cxl_add_port(struct device *uport, resource_size_t component_reg_phys, struct cxl_port *parent_port) { + struct device *dev, *host; struct cxl_port *port; - struct device *dev; int rc; port = cxl_port_alloc(uport, component_reg_phys, parent_port); @@ -391,7 +437,12 @@ struct cxl_port *devm_cxl_add_port(struct device *host, struct device *uport, if (rc) goto err; + host = get_cxl_topology_host(); + if (!host) + return ERR_PTR(-ENODEV); + rc = devm_add_action_or_reset(host, unregister_port, port); + 
put_cxl_topology_host(host); if (rc) return ERR_PTR(rc); diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h index 2c5627fa8a34..6fac4826d22b 100644 --- a/drivers/cxl/cxl.h +++ b/drivers/cxl/cxl.h @@ -152,6 +152,9 @@ int cxl_find_regblock(struct pci_dev *pdev, enum cxl_regloc_type type, #define CXL_RESOURCE_NONE ((resource_size_t) -1) #define CXL_TARGET_STRLEN 20 +int cxl_register_topology_host(struct device *host); +void cxl_unregister_topology_host(struct device *host); + /* * cxl_decoder flags that define the type of memory / devices this * decoder supports as well as configuration lock status See "CXL 2.0 @@ -279,7 +282,7 @@ struct cxl_dport { }; struct cxl_port *to_cxl_port(struct device *dev); -struct cxl_port *devm_cxl_add_port(struct device *host, struct device *uport, +struct cxl_port *devm_cxl_add_port(struct device *uport, resource_size_t component_reg_phys, struct cxl_port *parent_port); From patchwork Sat Nov 20 00:02:42 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 1557476 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4HwtzB1rltz9sS8 for ; Sat, 20 Nov 2021 11:03:18 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236401AbhKTAGS (ORCPT ); Fri, 19 Nov 2021 19:06:18 -0500 Received: from mga12.intel.com ([192.55.52.136]:5724 "EHLO mga12.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236042AbhKTAGM (ORCPT ); Fri, 19 Nov 2021 19:06:12 -0500 X-IronPort-AV: E=McAfee;i="6200,9189,10173"; a="214542412" X-IronPort-AV: 
E=Sophos;i="5.87,248,1631602800"; d="scan'208";a="214542412" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by fmsmga106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 19 Nov 2021 16:02:59 -0800 X-IronPort-AV: E=Sophos;i="5.87,248,1631602800"; d="scan'208";a="496088389" Received: from jfaistl-mobl1.amr.corp.intel.com (HELO bad-guy.kumite) ([10.252.139.58]) by orsmga007-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 19 Nov 2021 16:02:59 -0800 From: Ben Widawsky To: linux-cxl@vger.kernel.org, linux-pci@vger.kernel.org Cc: Ben Widawsky , Alison Schofield , Dan Williams , Ira Weiny , Jonathan Cameron , Vishal Verma Subject: [PATCH 15/23] cxl/core: Store global list of root ports Date: Fri, 19 Nov 2021 16:02:42 -0800 Message-Id: <20211120000250.1663391-16-ben.widawsky@intel.com> X-Mailer: git-send-email 2.34.0 In-Reply-To: <20211120000250.1663391-1-ben.widawsky@intel.com> References: <20211120000250.1663391-1-ben.widawsky@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org CXL root ports (the downstream port of a host bridge) are to be enumerated by a platform specific driver. In the case of ACPI-compliant systems, this is the cxl_acpi driver. Root ports are the first CXL spec-defined component that can be "found" by that platform specific driver. By storing a list of these root ports, components in lower levels of the topology (switches and endpoints) have a mechanism to walk up their device hierarchy to find an enumerated root port. This will be necessary for region programming. Signed-off-by: Ben Widawsky --- Dan points out in review this is possible to do without a new global list. While I agree, I was unable to get it working in a reasonable amount of time. Will punt on that for now. 
--- drivers/cxl/acpi.c | 4 ++-- drivers/cxl/core/bus.c | 34 +++++++++++++++++++++++++++++++++- drivers/cxl/cxl.h | 5 ++++- tools/testing/cxl/mock_acpi.c | 4 ++-- 4 files changed, 41 insertions(+), 6 deletions(-) diff --git a/drivers/cxl/acpi.c b/drivers/cxl/acpi.c index 82cc42ab38c6..c12e4fed7941 100644 --- a/drivers/cxl/acpi.c +++ b/drivers/cxl/acpi.c @@ -159,7 +159,7 @@ __mock int match_add_root_ports(struct pci_dev *pdev, void *data) creg = cxl_reg_block(pdev, &map); port_num = FIELD_GET(PCI_EXP_LNKCAP_PN, lnkcap); - rc = cxl_add_dport(port, &pdev->dev, port_num, creg); + rc = cxl_add_dport(port, &pdev->dev, port_num, creg, true); if (rc) { ctx->error = rc; return rc; @@ -341,7 +341,7 @@ static int add_host_bridge_dport(struct device *match, void *arg) return 0; } - rc = cxl_add_dport(root_port, match, uid, ctx.chbcr); + rc = cxl_add_dport(root_port, match, uid, ctx.chbcr, false); if (rc) { dev_err(host, "failed to add downstream port: %s\n", dev_name(match)); diff --git a/drivers/cxl/core/bus.c b/drivers/cxl/core/bus.c index 2ad38167796d..9e0d7d5d9298 100644 --- a/drivers/cxl/core/bus.c +++ b/drivers/cxl/core/bus.c @@ -26,6 +26,8 @@ static DEFINE_IDA(cxl_port_ida); static DECLARE_RWSEM(topology_host_sem); +static LIST_HEAD(cxl_root_ports); +static DECLARE_RWSEM(root_port_sem); static struct device *cxl_topology_host; @@ -326,12 +328,31 @@ struct cxl_port *to_cxl_port(struct device *dev) return container_of(dev, struct cxl_port, dev); } +struct cxl_dport *cxl_get_root_dport(struct device *dev) +{ + struct cxl_dport *ret = NULL; + struct cxl_dport *dport; + + down_read(&root_port_sem); + list_for_each_entry(dport, &cxl_root_ports, root_port_link) { + if (dport->dport == dev) { + ret = dport; + break; + } + } + + up_read(&root_port_sem); + return ret; +} +EXPORT_SYMBOL_NS_GPL(cxl_get_root_dport, CXL); + static void unregister_port(void *_port) { struct cxl_port *port = _port; struct cxl_dport *dport; device_lock(&port->dev); + down_read(&root_port_sem); 
list_for_each_entry(dport, &port->dports, list) { char link_name[CXL_TARGET_STRLEN]; @@ -339,7 +360,10 @@ static void unregister_port(void *_port) dport->port_id) >= CXL_TARGET_STRLEN) continue; sysfs_remove_link(&port->dev.kobj, link_name); + + list_del_init(&dport->root_port_link); } + up_read(&root_port_sem); device_unlock(&port->dev); device_unregister(&port->dev); } @@ -493,12 +517,13 @@ static int add_dport(struct cxl_port *port, struct cxl_dport *new) * @dport_dev: firmware or PCI device representing the dport * @port_id: identifier for this dport in a decoder's target list * @component_reg_phys: optional location of CXL component registers + * @root_port: is this a root port (hostbridge downstream) * * Note that all allocations and links are undone by cxl_port deletion * and release. */ int cxl_add_dport(struct cxl_port *port, struct device *dport_dev, int port_id, - resource_size_t component_reg_phys) + resource_size_t component_reg_phys, bool root_port) { char link_name[CXL_TARGET_STRLEN]; struct cxl_dport *dport; @@ -513,6 +538,7 @@ int cxl_add_dport(struct cxl_port *port, struct device *dport_dev, int port_id, return -ENOMEM; INIT_LIST_HEAD(&dport->list); + INIT_LIST_HEAD(&dport->root_port_link); dport->dport = get_device(dport_dev); dport->port_id = port_id; dport->component_reg_phys = component_reg_phys; @@ -526,6 +552,12 @@ int cxl_add_dport(struct cxl_port *port, struct device *dport_dev, int port_id, if (rc) goto err; + if (root_port) { + down_write(&root_port_sem); + list_add_tail(&dport->root_port_link, &cxl_root_ports); + up_write(&root_port_sem); + } + return 0; err: cxl_dport_release(dport); diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h index 6fac4826d22b..3962a5e6a950 100644 --- a/drivers/cxl/cxl.h +++ b/drivers/cxl/cxl.h @@ -272,6 +272,7 @@ struct cxl_port { * @component_reg_phys: downstream port component registers * @port: reference to cxl_port that contains this downstream port * @list: node for a cxl_port's list of cxl_dport 
instances + * @root_port_link: node for global list of root ports */ struct cxl_dport { struct device *dport; @@ -279,6 +280,7 @@ struct cxl_dport { resource_size_t component_reg_phys; struct cxl_port *port; struct list_head list; + struct list_head root_port_link; }; struct cxl_port *to_cxl_port(struct device *dev); @@ -287,7 +289,8 @@ struct cxl_port *devm_cxl_add_port(struct device *uport, struct cxl_port *parent_port); int cxl_add_dport(struct cxl_port *port, struct device *dport, int port_id, - resource_size_t component_reg_phys); + resource_size_t component_reg_phys, bool root_port); +struct cxl_dport *cxl_get_root_dport(struct device *dev); struct cxl_decoder *to_cxl_decoder(struct device *dev); bool is_root_decoder(struct device *dev); diff --git a/tools/testing/cxl/mock_acpi.c b/tools/testing/cxl/mock_acpi.c index 4c8a493ace56..ddefc4345f36 100644 --- a/tools/testing/cxl/mock_acpi.c +++ b/tools/testing/cxl/mock_acpi.c @@ -57,7 +57,7 @@ static int match_add_root_port(struct pci_dev *pdev, void *data) /* TODO walk DVSEC to find component register base */ port_num = FIELD_GET(PCI_EXP_LNKCAP_PN, lnkcap); - rc = cxl_add_dport(port, &pdev->dev, port_num, CXL_RESOURCE_NONE); + rc = cxl_add_dport(port, &pdev->dev, port_num, CXL_RESOURCE_NONE, true); if (rc) { dev_err(dev, "failed to add dport: %s (%d)\n", dev_name(&pdev->dev), rc); @@ -78,7 +78,7 @@ static int mock_add_root_port(struct platform_device *pdev, void *data) struct device *dev = ctx->dev; int rc; - rc = cxl_add_dport(port, &pdev->dev, pdev->id, CXL_RESOURCE_NONE); + rc = cxl_add_dport(port, &pdev->dev, pdev->id, CXL_RESOURCE_NONE, true); if (rc) { dev_err(dev, "failed to add dport: %s (%d)\n", dev_name(&pdev->dev), rc); From patchwork Sat Nov 20 00:02:43 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 1557477 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: 
patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4HwtzB4C1Gz9t0G for ; Sat, 20 Nov 2021 11:03:18 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236042AbhKTAGT (ORCPT ); Fri, 19 Nov 2021 19:06:19 -0500 Received: from mga12.intel.com ([192.55.52.136]:5726 "EHLO mga12.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236143AbhKTAGM (ORCPT ); Fri, 19 Nov 2021 19:06:12 -0500 X-IronPort-AV: E=McAfee;i="6200,9189,10173"; a="214542413" X-IronPort-AV: E=Sophos;i="5.87,248,1631602800"; d="scan'208";a="214542413" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by fmsmga106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 19 Nov 2021 16:02:59 -0800 X-IronPort-AV: E=Sophos;i="5.87,248,1631602800"; d="scan'208";a="496088392" Received: from jfaistl-mobl1.amr.corp.intel.com (HELO bad-guy.kumite) ([10.252.139.58]) by orsmga007-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 19 Nov 2021 16:02:59 -0800 From: Ben Widawsky To: linux-cxl@vger.kernel.org, linux-pci@vger.kernel.org Cc: Ben Widawsky , Alison Schofield , Dan Williams , Ira Weiny , Jonathan Cameron , Vishal Verma Subject: [PATCH 16/23] cxl/pci: Cache device DVSEC offset Date: Fri, 19 Nov 2021 16:02:43 -0800 Message-Id: <20211120000250.1663391-17-ben.widawsky@intel.com> X-Mailer: git-send-email 2.34.0 In-Reply-To: <20211120000250.1663391-1-ben.widawsky@intel.com> References: <20211120000250.1663391-1-ben.widawsky@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org The PCIe device DVSEC, defined in the CXL 2.0 spec, section 8.1.3, is required to be implemented by CXL 2.0 endpoint devices. 
Since the information contained within this DVSEC will be critically important for region configuration, it makes sense to find the value early. Since this DVSEC is not strictly required for mailbox functionality, failure to find this information does not result in the driver failing to bind. Signed-off-by: Ben Widawsky --- drivers/cxl/cxlmem.h | 2 ++ drivers/cxl/pci.c | 7 +++++++ 2 files changed, 9 insertions(+) diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h index 8d96d009ad90..3ef3c652599e 100644 --- a/drivers/cxl/cxlmem.h +++ b/drivers/cxl/cxlmem.h @@ -98,6 +98,7 @@ struct cxl_mbox_cmd { * * @dev: The device associated with this CXL state * @regs: Parsed register blocks + * @device_dvsec: Offset to the PCIe device DVSEC * @payload_size: Size of space for payload * (CXL 2.0 8.2.8.4.3 Mailbox Capabilities Register) * @lsa_size: Size of Label Storage Area @@ -125,6 +126,7 @@ struct cxl_dev_state { struct device *dev; struct cxl_regs regs; + int device_dvsec; size_t payload_size; size_t lsa_size; diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c index d2c743a31b0c..f3872cbee7f8 100644 --- a/drivers/cxl/pci.c +++ b/drivers/cxl/pci.c @@ -474,6 +474,13 @@ static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) if (IS_ERR(cxlds)) return PTR_ERR(cxlds); + cxlds->device_dvsec = pci_find_dvsec_capability(pdev, + PCI_DVSEC_VENDOR_ID_CXL, + CXL_DVSEC_PCIE_DEVICE); + if (!cxlds->device_dvsec) + dev_warn(&pdev->dev, + "Device DVSEC not present. 
Expect limited functionality.\n"); + rc = cxl_setup_regs(pdev, CXL_REGLOC_RBI_MEMDEV, &map); if (rc) return rc; From patchwork Sat Nov 20 00:02:44 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 1557474 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4Hwtz91QbNz9t0G for ; Sat, 20 Nov 2021 11:03:17 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236300AbhKTAGR (ORCPT ); Fri, 19 Nov 2021 19:06:17 -0500 Received: from mga12.intel.com ([192.55.52.136]:5719 "EHLO mga12.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234439AbhKTAGK (ORCPT ); Fri, 19 Nov 2021 19:06:10 -0500 X-IronPort-AV: E=McAfee;i="6200,9189,10173"; a="214542414" X-IronPort-AV: E=Sophos;i="5.87,248,1631602800"; d="scan'208";a="214542414" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by fmsmga106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 19 Nov 2021 16:02:59 -0800 X-IronPort-AV: E=Sophos;i="5.87,248,1631602800"; d="scan'208";a="496088395" Received: from jfaistl-mobl1.amr.corp.intel.com (HELO bad-guy.kumite) ([10.252.139.58]) by orsmga007-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 19 Nov 2021 16:02:59 -0800 From: Ben Widawsky To: linux-cxl@vger.kernel.org, linux-pci@vger.kernel.org Cc: Ben Widawsky , Alison Schofield , Dan Williams , Ira Weiny , Jonathan Cameron , Vishal Verma Subject: [PATCH 17/23] cxl: Cache and pass DVSEC ranges Date: Fri, 19 Nov 2021 16:02:44 -0800 Message-Id: <20211120000250.1663391-18-ben.widawsky@intel.com> 
X-Mailer: git-send-email 2.34.0 In-Reply-To: <20211120000250.1663391-1-ben.widawsky@intel.com> References: <20211120000250.1663391-1-ben.widawsky@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org The CXL 1.1 specification provided a mechanism for mapping an address space of a CXL device. That functionality is known as a "range" and can be programmed through PCIe DVSEC. In addition to this, the specification defines an active bit which a device will expose through the same DVSEC to notify system software that memory is initialized and ready. While CXL 2.0 introduces a more powerful mechanism called HDM decoders that are controlled by MMIO behind a PCIe BAR, the spec does allow the 1.1 style mapping to still be present. In such a case, when the CXL driver takes over, if it were to enable HDM decoding and there was an actively used range, things would likely blow up, in particular if it wasn't an identical mapping. This patch caches the relevant information that the cxl_mem driver will need to make the proper decision and passes it along. Signed-off-by: Ben Widawsky Reported-by: kernel test robot --- drivers/cxl/cxlmem.h | 19 +++++++ drivers/cxl/pci.c | 126 +++++++++++++++++++++++++++++++++++++++++++ drivers/cxl/pci.h | 13 +++++ 3 files changed, 158 insertions(+) diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h index 3ef3c652599e..eac5528ccaae 100644 --- a/drivers/cxl/cxlmem.h +++ b/drivers/cxl/cxlmem.h @@ -89,6 +89,22 @@ struct cxl_mbox_cmd { */ #define CXL_CAPACITY_MULTIPLIER SZ_256M +/** + * struct cxl_endpoint_dvsec_info - Cached DVSEC info + * @mem_enabled: cached value of mem_enabled in the DVSEC, PCIE_DEVICE + * @ranges: Number of HDM ranges this device contains. 
+ * @range.base: cached value of the range base in the DVSEC, PCIE_DEVICE + * @range.size: cached value of the range size in the DVSEC, PCIE_DEVICE + */ +struct cxl_endpoint_dvsec_info { + bool mem_enabled; + int ranges; + struct { + u64 base; + u64 size; + } range[2]; +}; + /** * struct cxl_dev_state - The driver device state * @@ -117,6 +133,7 @@ struct cxl_mbox_cmd { * @active_persistent_bytes: sum of hard + soft persistent * @next_volatile_bytes: volatile capacity change pending device reset * @next_persistent_bytes: persistent capacity change pending device reset + * @info: Cached DVSEC information about the device. * @mbox_send: @dev specific transport for transmitting mailbox commands * * See section 8.2.9.5.2 Capacity Configuration and Label Storage for @@ -147,6 +164,8 @@ struct cxl_dev_state { u64 next_volatile_bytes; u64 next_persistent_bytes; + struct cxl_endpoint_dvsec_info *info; + int (*mbox_send)(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd); }; diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c index f3872cbee7f8..b3f46045bf3e 100644 --- a/drivers/cxl/pci.c +++ b/drivers/cxl/pci.c @@ -452,8 +452,126 @@ static int cxl_setup_regs(struct pci_dev *pdev, enum cxl_regloc_type type, return rc; } +#define CDPD(cxlds, which) \ + cxlds->device_dvsec + CXL_DVSEC_PCIE_DEVICE_##which##_OFFSET + +#define CDPDR(cxlds, which, sorb, lohi) \ + cxlds->device_dvsec + \ + CXL_DVSEC_PCIE_DEVICE_RANGE_##sorb##_##lohi##_OFFSET(which) + +static int wait_for_valid(struct cxl_dev_state *cxlds) +{ + struct pci_dev *pdev = to_pci_dev(cxlds->dev); + const unsigned long timeout = jiffies + HZ; + bool valid; + + do { + u64 size; + u32 temp; + int rc; + + rc = pci_read_config_dword(pdev, CDPDR(cxlds, 0, SIZE, HIGH), + &temp); + if (rc) + return -ENXIO; + size = (u64)temp << 32; + + rc = pci_read_config_dword(pdev, CDPDR(cxlds, 0, SIZE, LOW), + &temp); + if (rc) + return -ENXIO; + size |= temp & CXL_DVSEC_PCIE_DEVICE_MEM_SIZE_LOW_MASK; + + /* + * Memory_Info_Valid: When 
set, indicates that the CXL Range 1 + * Size high and Size Low registers are valid. Must be set + * within 1 second of deassertion of reset to CXL device. + */ + valid = FIELD_GET(CXL_DVSEC_PCIE_DEVICE_MEM_INFO_VALID, temp); + if (valid) + break; + cpu_relax(); + } while (!time_after(jiffies, timeout)); + + return valid ? 0 : -ETIMEDOUT; +} + +static struct cxl_endpoint_dvsec_info *dvsec_ranges(struct cxl_dev_state *cxlds) +{ + struct pci_dev *pdev = to_pci_dev(cxlds->dev); + struct cxl_endpoint_dvsec_info *info; + int hdm_count, rc, i; + u16 cap, ctrl; + + rc = pci_read_config_word(pdev, CDPD(cxlds, CAP), &cap); + if (rc) + return ERR_PTR(-ENXIO); + rc = pci_read_config_word(pdev, CDPD(cxlds, CTRL), &ctrl); + if (rc) + return ERR_PTR(-ENXIO); + + if (!(cap & CXL_DVSEC_PCIE_DEVICE_MEM_CAPABLE)) + return ERR_PTR(-ENODEV); + + /* + * It is not allowed by spec for MEM.capable to be set and have 0 HDM + * decoders. Therefore, the device is not considered CXL.mem capable. + */ + hdm_count = FIELD_GET(CXL_DVSEC_PCIE_DEVICE_HDM_COUNT_MASK, cap); + if (!hdm_count || hdm_count > 2) + return ERR_PTR(-EINVAL); + + rc = wait_for_valid(cxlds); + if (rc) + return ERR_PTR(rc); + + info = devm_kzalloc(cxlds->dev, sizeof(*info), GFP_KERNEL); + if (!info) + return ERR_PTR(-ENOMEM); + + info->mem_enabled = FIELD_GET(CXL_DVSEC_PCIE_DEVICE_MEM_ENABLE, ctrl); + + for (i = 0; i < hdm_count; i++) { + u64 base, size; + u32 temp; + + rc = pci_read_config_dword(pdev, CDPDR(cxlds, i, SIZE, HIGH), + &temp); + if (rc) + continue; + size = (u64)temp << 32; + + rc = pci_read_config_dword(pdev, CDPDR(cxlds, i, SIZE, LOW), + &temp); + if (rc) + continue; + size |= temp & CXL_DVSEC_PCIE_DEVICE_MEM_SIZE_LOW_MASK; + + rc = pci_read_config_dword(pdev, CDPDR(cxlds, i, BASE, HIGH), + &temp); + if (rc) + continue; + base = (u64)temp << 32; + + rc = pci_read_config_dword(pdev, CDPDR(cxlds, i, BASE, LOW), + &temp); + if (rc) + continue; + base |= temp & CXL_DVSEC_PCIE_DEVICE_MEM_BASE_LOW_MASK; + + 
info->range[i].base = base; + info->range[i].size = size; + info->ranges++; + } + + return info; +} + +#undef CDPDR + static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) { + struct cxl_endpoint_dvsec_info *info; struct cxl_register_map map; struct cxl_memdev *cxlmd; struct cxl_dev_state *cxlds; @@ -505,6 +623,14 @@ static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) if (rc) return rc; + info = dvsec_ranges(cxlds); + if (IS_ERR(info)) + dev_err(&pdev->dev, + "Failed to get DVSEC range information (%ld)\n", + PTR_ERR(info)); + else + cxlds->info = info; + cxlmd = devm_cxl_add_memdev(cxlds); if (IS_ERR(cxlmd)) return PTR_ERR(cxlmd); diff --git a/drivers/cxl/pci.h b/drivers/cxl/pci.h index a4b506bb37d1..2a48cd65bf59 100644 --- a/drivers/cxl/pci.h +++ b/drivers/cxl/pci.h @@ -15,6 +15,19 @@ /* CXL 2.0 8.1.3: PCIe DVSEC for CXL Device */ #define CXL_DVSEC_PCIE_DEVICE 0 +#define CXL_DVSEC_PCIE_DEVICE_CAP_OFFSET 0xA +#define CXL_DVSEC_PCIE_DEVICE_MEM_CAPABLE BIT(2) +#define CXL_DVSEC_PCIE_DEVICE_HDM_COUNT_MASK GENMASK(5, 4) +#define CXL_DVSEC_PCIE_DEVICE_CTRL_OFFSET 0xC +#define CXL_DVSEC_PCIE_DEVICE_MEM_ENABLE BIT(2) +#define CXL_DVSEC_PCIE_DEVICE_RANGE_SIZE_HIGH_OFFSET(i) 0x18 + (i * 0x10) +#define CXL_DVSEC_PCIE_DEVICE_RANGE_SIZE_LOW_OFFSET(i) 0x1C + (i * 0x10) +#define CXL_DVSEC_PCIE_DEVICE_MEM_INFO_VALID BIT(0) +#define CXL_DVSEC_PCIE_DEVICE_MEM_ACTIVE BIT(1) +#define CXL_DVSEC_PCIE_DEVICE_MEM_SIZE_LOW_MASK GENMASK(31, 28) +#define CXL_DVSEC_PCIE_DEVICE_RANGE_BASE_HIGH_OFFSET(i) 0x20 + (i * 0x10) +#define CXL_DVSEC_PCIE_DEVICE_RANGE_BASE_LOW_OFFSET(i) 0x24 + (i * 0x10) +#define CXL_DVSEC_PCIE_DEVICE_MEM_BASE_LOW_MASK GENMASK(31, 28) /* CXL 2.0 8.1.4: Non-CXL Function Map DVSEC */ #define CXL_DVSEC_FUNCTION_MAP 2 From patchwork Sat Nov 20 00:02:45 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 1557479 Return-Path: 
From: Ben Widawsky To: linux-cxl@vger.kernel.org, linux-pci@vger.kernel.org Cc: Ben Widawsky , Alison Schofield , Dan Williams , Ira Weiny , Jonathan Cameron , Vishal Verma Subject: [PATCH 18/23] cxl/pci: Implement wait for media active Date: Fri, 19 Nov 2021 16:02:45 -0800 Message-Id: <20211120000250.1663391-19-ben.widawsky@intel.com> In-Reply-To: <20211120000250.1663391-1-ben.widawsky@intel.com> References: <20211120000250.1663391-1-ben.widawsky@intel.com>

CXL 2.0 8.1.3.8.2 defines "Memory_Active: When set, indicates that the
CXL Range 1 memory is fully initialized and available for software use. Must be set within Range 1. Memory_Active_Timeout of deassertion of reset to CXL device if CXL.mem HwInit Mode=1" The CXL* Type 3 Memory Device Software Guide (Revision 1.0) further describes the need to check this bit before using HDM. Unfortunately, Memory_Active can take quite a long time depending on media size (up to 256s per 2.0 spec). Since the cxl_pci driver doesn't care about this, a callback is exported as part of driver state for use by drivers that do care. Signed-off-by: Ben Widawsky --- This patch did not exist in RFCv2 --- drivers/cxl/cxlmem.h | 1 + drivers/cxl/pci.c | 56 ++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 57 insertions(+) diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h index eac5528ccaae..a9424dd4e5c3 100644 --- a/drivers/cxl/cxlmem.h +++ b/drivers/cxl/cxlmem.h @@ -167,6 +167,7 @@ struct cxl_dev_state { struct cxl_endpoint_dvsec_info *info; int (*mbox_send)(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd); + int (*wait_media_ready)(struct cxl_dev_state *cxlds); }; enum cxl_opcode { diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c index b3f46045bf3e..f1a68bfe5f77 100644 --- a/drivers/cxl/pci.c +++ b/drivers/cxl/pci.c @@ -496,6 +496,60 @@ static int wait_for_valid(struct cxl_dev_state *cxlds) return valid ? 0 : -ETIMEDOUT; } +/* + * Implements Figure 43 of the CXL Type 3 Memory Device Software Guide. Waits a + * full 256s no matter what the device reports. 
+ */ +static int wait_for_media_ready(struct cxl_dev_state *cxlds) +{ + const unsigned long timeout = jiffies + (256 * HZ); + struct pci_dev *pdev = to_pci_dev(cxlds->dev); + u64 md_status; + bool active; + int rc; + + rc = wait_for_valid(cxlds); + if (rc) + return rc; + + do { + u64 size; + u32 temp; + int rc; + + rc = pci_read_config_dword(pdev, CDPDR(cxlds, 0, SIZE, HIGH), + &temp); + if (rc) + return -ENXIO; + size = (u64)temp << 32; + + rc = pci_read_config_dword(pdev, CDPDR(cxlds, 0, SIZE, LOW), + &temp); + if (rc) + return -ENXIO; + size |= temp & CXL_DVSEC_PCIE_DEVICE_MEM_SIZE_LOW_MASK; + + active = FIELD_GET(CXL_DVSEC_PCIE_DEVICE_MEM_ACTIVE, temp); + if (active) + break; + cpu_relax(); + mdelay(100); + } while (!time_after(jiffies, timeout)); + + if (!active) + return -ETIMEDOUT; + + rc = check_device_status(cxlds); + if (rc) + return rc; + + md_status = readq(cxlds->regs.memdev + CXLMDEV_STATUS_OFFSET); + if (!CXLMDEV_READY(md_status)) + return -EIO; + + return 0; +} + static struct cxl_endpoint_dvsec_info *dvsec_ranges(struct cxl_dev_state *cxlds) { struct pci_dev *pdev = to_pci_dev(cxlds->dev); @@ -598,6 +652,8 @@ static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) if (!cxlds->device_dvsec) dev_warn(&pdev->dev, "Device DVSEC not present. 
Expect limited functionality.\n"); + else + cxlds->wait_media_ready = wait_for_media_ready; rc = cxl_setup_regs(pdev, CXL_REGLOC_RBI_MEMDEV, &map); if (rc)

From patchwork Sat Nov 20 00:02:46 2021 X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 1557478 From: Ben Widawsky To: linux-cxl@vger.kernel.org, linux-pci@vger.kernel.org Cc: Ben Widawsky , Alison Schofield , Dan Williams , Ira Weiny , Jonathan Cameron , Vishal Verma Subject: [PATCH 19/23] cxl/pci: Store component register base in cxlds Date: Fri, 19 Nov 2021 16:02:46 -0800 Message-Id:
<20211120000250.1663391-20-ben.widawsky@intel.com> X-Mailer: git-send-email 2.34.0 In-Reply-To: <20211120000250.1663391-1-ben.widawsky@intel.com> References: <20211120000250.1663391-1-ben.widawsky@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org The component register base address is enumerated and stored in the cxl_device_state structure for use by the endpoint driver. Component register base addresses are obtained through PCI mechanisms. As such it makes most sense for the cxl_pci driver to obtain that address. In order to reuse the port driver for enumerating decoder resources for an endpoint, it is desirable to be able to add the endpoint as a port. The endpoint does know of the cxlds and can pull the component register base from there and pass it along to port creation. Signed-off-by: Ben Widawsky Reported-by: kernel test robot --- Changes since RFCv2: This patch was originally named, "cxl/core: Store component register base for memdevs". It plumbed the component registers through memdev creation. After more work, it became apparent we needed to plumb other stuff from the pci driver, so going forward, cxlds will just be referenced by the cxl_mem driver. This also allows us to ignore the change needed to cxl_test - Rework patch to store the base in cxlds - Remove defunct comment (Dan) --- drivers/cxl/cxlmem.h | 2 ++ drivers/cxl/pci.c | 11 +++++++++++ 2 files changed, 13 insertions(+) diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h index a9424dd4e5c3..b1d753541f4e 100644 --- a/drivers/cxl/cxlmem.h +++ b/drivers/cxl/cxlmem.h @@ -134,6 +134,7 @@ struct cxl_endpoint_dvsec_info { * @next_volatile_bytes: volatile capacity change pending device reset * @next_persistent_bytes: persistent capacity change pending device reset * @info: Cached DVSEC information about the device. 
+ * @component_reg_phys: register base of component registers * @mbox_send: @dev specific transport for transmitting mailbox commands * * See section 8.2.9.5.2 Capacity Configuration and Label Storage for @@ -165,6 +166,7 @@ struct cxl_dev_state { u64 next_persistent_bytes; struct cxl_endpoint_dvsec_info *info; + resource_size_t component_reg_phys; int (*mbox_send)(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd); int (*wait_media_ready)(struct cxl_dev_state *cxlds); diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c index f1a68bfe5f77..a8e375950514 100644 --- a/drivers/cxl/pci.c +++ b/drivers/cxl/pci.c @@ -663,6 +663,17 @@ static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) if (rc) return rc; + /* + * If the component registers can't be found, the cxl_pci driver may + * still be useful for management functions so don't return an error. + */ + cxlds->component_reg_phys = CXL_RESOURCE_NONE; + rc = cxl_setup_regs(pdev, CXL_REGLOC_RBI_COMPONENT, &map); + if (rc) + dev_warn(&cxlmd->dev, "No component registers (%d)\n", rc); + else + cxlds->component_reg_phys = cxl_reg_block(pdev, &map); + rc = cxl_pci_setup_mailbox(cxlds); if (rc) return rc;

From patchwork Sat Nov 20 00:02:47 2021 X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 1557483
From: Ben Widawsky To: linux-cxl@vger.kernel.org, linux-pci@vger.kernel.org Cc: Ben Widawsky , Alison Schofield , Dan Williams , Ira Weiny , Jonathan Cameron , Vishal Verma Subject: [PATCH 20/23] cxl/port: Introduce a port driver Date: Fri, 19 Nov 2021 16:02:47 -0800 Message-Id: <20211120000250.1663391-21-ben.widawsky@intel.com> In-Reply-To: <20211120000250.1663391-1-ben.widawsky@intel.com> References: <20211120000250.1663391-1-ben.widawsky@intel.com>

The CXL port driver is responsible for managing the decoder resources contained within the port. It will also provide APIs that other drivers will consume for managing these resources. There are 4 types of ports in a system: 1. Platform port. This is a non-programmable entity. Such a port is named rootX. It is enumerated by cxl_acpi in an ACPI based system. 2. Hostbridge port. This port's register access is defined in a platform specific way (CHBS for ACPI platforms). It has n downstream ports, each of which is known as a CXL 2.0 root port. Once the platform specific mechanism to get the offset to the registers is obtained, it operates just like other CXL components.
The enumeration of this component is started by cxl_acpi and completed by cxl_port. 3. Switch port. A switch port is similar to a hostbridge port except that register access is defined in the CXL specification in a platform agnostic way. The downstream ports for a switch are simply known as downstream ports. The enumeration of these is entirely contained within cxl_port. 4. Endpoint port. Endpoint ports are similar to switch ports with the exception that they have no downstream ports, only the underlying media on the device. The enumeration of these is started by cxl_pci and completed by cxl_port. Signed-off-by: Ben Widawsky Reported-by: kernel test robot --- Changes since RFCv2: - Reword commit message tense (Dan) - Reword commit message - Drop SOFTDEP since it's not needed yet (Dan) - Add CONFIG_CXL_PORT (Dan) - s/CXL_DECODER_F_EN/CXL_DECODER_F_ENABLE (Dan) - rename cxl_hdm_decoder_ functions to "to_" (Dan) - remove useless inline (Dan) - Check endpoint decoder based on dport list instead of driver id (Dan) - Use range instead of resource per dependent patch change - Use clever union packing for target list (Dan) - Only check NULL from devm_cxl_iomap_block (Dan) - Properly parent the created cxl decoders - Move bus rescanning from cxl_acpi to here (Dan) - Remove references to "CFMWS" in core (Dan) - Use macro to help keep within 80 character lines --- .../driver-api/cxl/memory-devices.rst | 5 + drivers/cxl/Kconfig | 22 ++ drivers/cxl/Makefile | 2 + drivers/cxl/core/bus.c | 67 ++++ drivers/cxl/core/regs.c | 6 +- drivers/cxl/cxl.h | 34 +- drivers/cxl/port.c | 323 ++++++++++++++++++ 7 files changed, 450 insertions(+), 9 deletions(-) create mode 100644 drivers/cxl/port.c diff --git a/Documentation/driver-api/cxl/memory-devices.rst b/Documentation/driver-api/cxl/memory-devices.rst index 3b8f41395f6b..fbf0393cdddc 100644 --- a/Documentation/driver-api/cxl/memory-devices.rst +++ b/Documentation/driver-api/cxl/memory-devices.rst @@ -28,6
+28,11 @@ CXL Memory Device .. kernel-doc:: drivers/cxl/pci.c :internal: +CXL Port +-------- +.. kernel-doc:: drivers/cxl/port.c + :doc: cxl port + CXL Core -------- .. kernel-doc:: drivers/cxl/cxl.h diff --git a/drivers/cxl/Kconfig b/drivers/cxl/Kconfig index ef05e96f8f97..3aeb33bba5a3 100644 --- a/drivers/cxl/Kconfig +++ b/drivers/cxl/Kconfig @@ -77,4 +77,26 @@ config CXL_PMEM provisioning the persistent memory capacity of CXL memory expanders. If unsure say 'm'. + +config CXL_MEM + tristate "CXL.mem: Memory Devices" + select CXL_PORT + depends on CXL_PCI + default CXL_BUS + help + The CXL.mem protocol allows a device to act as a provider of "System + RAM" and/or "Persistent Memory" that is fully coherent as if the + memory was attached to the typical CPU memory controller. This is + known as HDM "Host-managed Device Memory". + + Say 'y/m' to enable a driver that will attach to CXL.mem devices for + memory expansion and control of HDM. See Chapter 9.13 in the CXL 2.0 + specification for a detailed description of HDM. + + If unsure say 'm'. 
+ + +config CXL_PORT + tristate + endif diff --git a/drivers/cxl/Makefile b/drivers/cxl/Makefile index cf07ae6cea17..56fcac2323cb 100644 --- a/drivers/cxl/Makefile +++ b/drivers/cxl/Makefile @@ -3,7 +3,9 @@ obj-$(CONFIG_CXL_BUS) += core/ obj-$(CONFIG_CXL_PCI) += cxl_pci.o obj-$(CONFIG_CXL_ACPI) += cxl_acpi.o obj-$(CONFIG_CXL_PMEM) += cxl_pmem.o +obj-$(CONFIG_CXL_PORT) += cxl_port.o cxl_pci-y := pci.o cxl_acpi-y := acpi.o cxl_pmem-y := pmem.o +cxl_port-y := port.o diff --git a/drivers/cxl/core/bus.c b/drivers/cxl/core/bus.c index 9e0d7d5d9298..46a06cfe79bd 100644 --- a/drivers/cxl/core/bus.c +++ b/drivers/cxl/core/bus.c @@ -31,6 +31,8 @@ static DECLARE_RWSEM(root_port_sem); static struct device *cxl_topology_host; +static bool is_cxl_decoder(struct device *dev); + int cxl_register_topology_host(struct device *host) { down_write(&topology_host_sem); @@ -75,6 +77,45 @@ static void put_cxl_topology_host(struct device *dev) up_read(&topology_host_sem); } +static int decoder_match(struct device *dev, void *data) +{ + struct resource *theirs = (struct resource *)data; + struct cxl_decoder *cxld; + + if (!is_cxl_decoder(dev)) + return 0; + + cxld = to_cxl_decoder(dev); + if (theirs->start <= cxld->decoder_range.start && + theirs->end >= cxld->decoder_range.end) + return 1; + + return 0; +} + +static struct cxl_decoder *cxl_find_root_decoder(resource_size_t base, + resource_size_t size) +{ + struct cxl_decoder *cxld = NULL; + struct device *cxldd; + struct device *host; + struct resource res = (struct resource){ + .start = base, + .end = base + size - 1, + }; + + host = get_cxl_topology_host(); + if (!host) + return NULL; + + cxldd = device_find_child(host, &res, decoder_match); + if (cxldd) + cxld = to_cxl_decoder(cxldd); + + put_cxl_topology_host(host); + return cxld; +} + static ssize_t devtype_show(struct device *dev, struct device_attribute *attr, char *buf) { @@ -280,6 +321,11 @@ bool is_root_decoder(struct device *dev) } EXPORT_SYMBOL_NS_GPL(is_root_decoder, CXL); 
+static bool is_cxl_decoder(struct device *dev) +{ + return dev->type->release == cxl_decoder_release; +} + struct cxl_decoder *to_cxl_decoder(struct device *dev) { if (dev_WARN_ONCE(dev, dev->type->release != cxl_decoder_release, @@ -327,6 +373,7 @@ struct cxl_port *to_cxl_port(struct device *dev) return NULL; return container_of(dev, struct cxl_port, dev); } +EXPORT_SYMBOL_NS_GPL(to_cxl_port, CXL); struct cxl_dport *cxl_get_root_dport(struct device *dev) { @@ -735,6 +782,24 @@ EXPORT_SYMBOL_NS_GPL(cxl_decoder_add, CXL); static void cxld_unregister(void *dev) { + struct cxl_decoder *plat_decoder, *cxld = to_cxl_decoder(dev); + resource_size_t base, size; + + if (is_root_decoder(dev)) { + device_unregister(dev); + return; + } + + base = cxld->decoder_range.start; + size = range_len(&cxld->decoder_range); + + if (size) { + plat_decoder = cxl_find_root_decoder(base, size); + if (plat_decoder) + __release_region(&plat_decoder->platform_res, base, + size); + } + device_unregister(dev); } @@ -789,6 +854,8 @@ static int cxl_device_id(struct device *dev) return CXL_DEVICE_NVDIMM_BRIDGE; if (dev->type == &cxl_nvdimm_type) return CXL_DEVICE_NVDIMM; + if (dev->type == &cxl_port_type) + return CXL_DEVICE_PORT; return 0; } diff --git a/drivers/cxl/core/regs.c b/drivers/cxl/core/regs.c index 41a0245867ea..f191b0c995a7 100644 --- a/drivers/cxl/core/regs.c +++ b/drivers/cxl/core/regs.c @@ -159,9 +159,8 @@ void cxl_probe_device_regs(struct device *dev, void __iomem *base, } EXPORT_SYMBOL_NS_GPL(cxl_probe_device_regs, CXL); -static void __iomem *devm_cxl_iomap_block(struct device *dev, - resource_size_t addr, - resource_size_t length) +void __iomem *devm_cxl_iomap_block(struct device *dev, resource_size_t addr, + resource_size_t length) { void __iomem *ret_val; struct resource *res; @@ -180,6 +179,7 @@ static void __iomem *devm_cxl_iomap_block(struct device *dev, return ret_val; } +EXPORT_SYMBOL_NS_GPL(devm_cxl_iomap_block, CXL); int cxl_map_component_regs(struct pci_dev *pdev, 
struct cxl_component_regs *regs, diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h index 3962a5e6a950..24fa16157d5e 100644 --- a/drivers/cxl/cxl.h +++ b/drivers/cxl/cxl.h @@ -17,6 +17,9 @@ * (port-driver, region-driver, nvdimm object-drivers... etc). */ +/* CXL 2.0 8.2.4 CXL Component Register Layout and Definition */ +#define CXL_COMPONENT_REG_BLOCK_SIZE SZ_64K + /* CXL 2.0 8.2.5 CXL.cache and CXL.mem Registers*/ #define CXL_CM_OFFSET 0x1000 #define CXL_CM_CAP_HDR_OFFSET 0x0 @@ -36,11 +39,22 @@ #define CXL_HDM_DECODER_CAP_OFFSET 0x0 #define CXL_HDM_DECODER_COUNT_MASK GENMASK(3, 0) #define CXL_HDM_DECODER_TARGET_COUNT_MASK GENMASK(7, 4) -#define CXL_HDM_DECODER0_BASE_LOW_OFFSET 0x10 -#define CXL_HDM_DECODER0_BASE_HIGH_OFFSET 0x14 -#define CXL_HDM_DECODER0_SIZE_LOW_OFFSET 0x18 -#define CXL_HDM_DECODER0_SIZE_HIGH_OFFSET 0x1c -#define CXL_HDM_DECODER0_CTRL_OFFSET 0x20 +#define CXL_HDM_DECODER_INTERLEAVE_11_8 BIT(8) +#define CXL_HDM_DECODER_INTERLEAVE_14_12 BIT(9) +#define CXL_HDM_DECODER_CTRL_OFFSET 0x4 +#define CXL_HDM_DECODER_ENABLE BIT(1) +#define CXL_HDM_DECODER0_BASE_LOW_OFFSET(i) (0x20 * (i) + 0x10) +#define CXL_HDM_DECODER0_BASE_HIGH_OFFSET(i) (0x20 * (i) + 0x14) +#define CXL_HDM_DECODER0_SIZE_LOW_OFFSET(i) (0x20 * (i) + 0x18) +#define CXL_HDM_DECODER0_SIZE_HIGH_OFFSET(i) (0x20 * (i) + 0x1c) +#define CXL_HDM_DECODER0_CTRL_OFFSET(i) (0x20 * (i) + 0x20) +#define CXL_HDM_DECODER0_CTRL_IG_MASK GENMASK(3, 0) +#define CXL_HDM_DECODER0_CTRL_IW_MASK GENMASK(7, 4) +#define CXL_HDM_DECODER0_CTRL_COMMIT BIT(9) +#define CXL_HDM_DECODER0_CTRL_COMMITTED BIT(10) +#define CXL_HDM_DECODER0_CTRL_TYPE BIT(12) +#define CXL_HDM_DECODER0_TL_LOW(i) (0x20 * (i) + 0x24) +#define CXL_HDM_DECODER0_TL_HIGH(i) (0x20 * (i) + 0x28) static inline int cxl_hdm_decoder_count(u32 cap_hdr) { @@ -148,6 +162,8 @@ int cxl_map_device_regs(struct pci_dev *pdev, enum cxl_regloc_type; int cxl_find_regblock(struct pci_dev *pdev, enum cxl_regloc_type type, struct cxl_register_map *map); +void __iomem 
*devm_cxl_iomap_block(struct device *dev, resource_size_t addr, + resource_size_t length); #define CXL_RESOURCE_NONE ((resource_size_t) -1) #define CXL_TARGET_STRLEN 20 @@ -165,7 +181,8 @@ void cxl_unregister_topology_host(struct device *host); #define CXL_DECODER_F_TYPE2 BIT(2) #define CXL_DECODER_F_TYPE3 BIT(3) #define CXL_DECODER_F_LOCK BIT(4) -#define CXL_DECODER_F_MASK GENMASK(4, 0) +#define CXL_DECODER_F_ENABLE BIT(5) +#define CXL_DECODER_F_MASK GENMASK(5, 0) enum cxl_decoder_type { CXL_DECODER_ACCELERATOR = 2, @@ -255,6 +272,8 @@ struct cxl_walk_context { * @dports: cxl_dport instances referenced by decoders * @decoder_ida: allocator for decoder ids * @component_reg_phys: component register capability base address (optional) + * @rescan_work: worker object for bus rescans after port additions + * @data: opaque data with driver specific usage */ struct cxl_port { struct device dev; @@ -263,6 +282,8 @@ struct cxl_port { struct list_head dports; struct ida decoder_ida; resource_size_t component_reg_phys; + struct work_struct rescan_work; + void *data; }; /** @@ -325,6 +346,7 @@ void cxl_driver_unregister(struct cxl_driver *cxl_drv); #define CXL_DEVICE_NVDIMM_BRIDGE 1 #define CXL_DEVICE_NVDIMM 2 +#define CXL_DEVICE_PORT 3 #define MODULE_ALIAS_CXL(type) MODULE_ALIAS("cxl:t" __stringify(type) "*") #define CXL_MODALIAS_FMT "cxl:t%d" diff --git a/drivers/cxl/port.c b/drivers/cxl/port.c new file mode 100644 index 000000000000..3c03131517af --- /dev/null +++ b/drivers/cxl/port.c @@ -0,0 +1,323 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright(c) 2021 Intel Corporation. All rights reserved. */ +#include +#include +#include + +#include "cxlmem.h" + +/** + * DOC: cxl port + * + * The port driver implements the set of functionality needed to allow full + * decoder enumeration and routing. A CXL port is an abstraction of a CXL + * component that implements some amount of CXL decoding of CXL.mem traffic. + * As of the CXL 2.0 spec, this includes: + * + * .. 
list-table:: CXL Components w/ Ports + * :widths: 25 25 50 + * :header-rows: 1 + * + * * - component + * - upstream + * - downstream + * * - Hostbridge + * - ACPI0016 + * - root port + * * - Switch + * - Switch Upstream Port + * - Switch Downstream Port + * * - Endpoint + * - Endpoint Port + * - N/A + * + * The primary service this driver provides is enumerating HDM decoders and + * presenting APIs to other drivers to utilize the decoders. + */ + +static struct workqueue_struct *cxl_port_wq; + +struct cxl_port_data { + struct cxl_component_regs regs; + + struct port_caps { + unsigned int count; + unsigned int tc; + unsigned int interleave11_8; + unsigned int interleave14_12; + } caps; +}; + +static inline int to_interleave_granularity(u32 ctrl) +{ + int val = FIELD_GET(CXL_HDM_DECODER0_CTRL_IG_MASK, ctrl); + + return 256 << val; +} + +static inline int to_interleave_ways(u32 ctrl) +{ + int val = FIELD_GET(CXL_HDM_DECODER0_CTRL_IW_MASK, ctrl); + + return 1 << val; +} + +static void get_caps(struct cxl_port *port, struct cxl_port_data *cpd) +{ + void __iomem *hdm_decoder = cpd->regs.hdm_decoder; + struct port_caps *caps = &cpd->caps; + u32 hdm_cap; + + hdm_cap = readl(hdm_decoder + CXL_HDM_DECODER_CAP_OFFSET); + + caps->count = cxl_hdm_decoder_count(hdm_cap); + caps->tc = FIELD_GET(CXL_HDM_DECODER_TARGET_COUNT_MASK, hdm_cap); + caps->interleave11_8 = + FIELD_GET(CXL_HDM_DECODER_INTERLEAVE_11_8, hdm_cap); + caps->interleave14_12 = + FIELD_GET(CXL_HDM_DECODER_INTERLEAVE_14_12, hdm_cap); +} + +static int map_regs(struct cxl_port *port, void __iomem *crb, + struct cxl_port_data *cpd) +{ + struct cxl_register_map map; + struct cxl_component_reg_map *comp_map = &map.component_map; + + cxl_probe_component_regs(&port->dev, crb, comp_map); + if (!comp_map->hdm_decoder.valid) { + dev_err(&port->dev, "HDM decoder registers invalid\n"); + return -ENXIO; + } + + cpd->regs.hdm_decoder = crb + comp_map->hdm_decoder.offset; + + return 0; +} + +static u64 get_decoder_size(void 
__iomem *hdm_decoder, int n) +{ + u32 ctrl = readl(hdm_decoder + CXL_HDM_DECODER0_CTRL_OFFSET(n)); + + if (ctrl & CXL_HDM_DECODER0_CTRL_COMMITTED) + return 0; + + return ioread64_hi_lo(hdm_decoder + + CXL_HDM_DECODER0_SIZE_LOW_OFFSET(n)); +} + +static bool is_endpoint_port(struct cxl_port *port) +{ + /* Endpoints can't be ports... yet! */ + return false; +} + +static void rescan_ports(struct work_struct *work) +{ + if (bus_rescan_devices(&cxl_bus_type)) + pr_warn("Failed to rescan\n"); +} + +/* Minor layering violation */ +static int dvsec_range_used(struct cxl_port *port) +{ + struct cxl_endpoint_dvsec_info *info; + int i, ret = 0; + + if (!is_endpoint_port(port)) + return 0; + + info = port->data; + for (i = 0; i < info->ranges; i++) + if (info->range[i].size) + ret |= 1 << i; + + return ret; +} + +static int enumerate_hdm_decoders(struct cxl_port *port, + struct cxl_port_data *portdata) +{ + void __iomem *hdm_decoder = portdata->regs.hdm_decoder; + bool global_enable; + u32 global_ctrl; + int i = 0; + + global_ctrl = readl(hdm_decoder + CXL_HDM_DECODER_CTRL_OFFSET); + global_enable = global_ctrl & CXL_HDM_DECODER_ENABLE; + if (!global_enable) { + i = dvsec_range_used(port); + if (i) { + dev_err(&port->dev, + "Couldn't add port because device is using DVSEC range registers\n"); + return -EBUSY; + } + } + + for (i = 0; i < portdata->caps.count; i++) { + int rc, target_count = portdata->caps.tc; + struct cxl_decoder *cxld; + int *target_map = NULL; + u64 size; + + if (is_endpoint_port(port)) + target_count = 0; + + cxld = cxl_decoder_alloc(port, target_count); + if (IS_ERR(cxld)) { + dev_warn(&port->dev, + "Failed to allocate the decoder\n"); + return PTR_ERR(cxld); + } + + cxld->target_type = CXL_DECODER_EXPANDER; + cxld->interleave_ways = 1; + cxld->interleave_granularity = 0; + + size = get_decoder_size(hdm_decoder, i); + if (size != 0) { +#define decoderN(reg, n) hdm_decoder + CXL_HDM_DECODER0_##reg(n) + int temp[CXL_DECODER_MAX_INTERLEAVE]; + u64 base; + u32 
ctrl; + int j; + union { + u64 value; + char target_id[8]; + } target_list; + + target_map = temp; + ctrl = readl(decoderN(CTRL_OFFSET, i)); + base = ioread64_hi_lo(decoderN(BASE_LOW_OFFSET, i)); + + cxld->decoder_range = (struct range){ + .start = base, + .end = base + size - 1 + }; + + cxld->flags = CXL_DECODER_F_ENABLE; + cxld->interleave_ways = to_interleave_ways(ctrl); + cxld->interleave_granularity = + to_interleave_granularity(ctrl); + + if (FIELD_GET(CXL_HDM_DECODER0_CTRL_TYPE, ctrl) == 0) + cxld->target_type = CXL_DECODER_ACCELERATOR; + + target_list.value = ioread64_hi_lo(decoderN(TL_LOW, i)); + for (j = 0; j < cxld->interleave_ways; j++) + target_map[j] = target_list.target_id[j]; +#undef decoderN + } + + rc = cxl_decoder_add_locked(cxld, target_map); + if (rc) + put_device(&cxld->dev); + else + rc = cxl_decoder_autoremove(&port->dev, cxld); + if (rc) + dev_err(&port->dev, "Failed to add decoder\n"); + else + dev_dbg(&cxld->dev, "Added to port %s\n", + dev_name(&port->dev)); + } + + /* + * Turn on global enable now since DVSEC ranges aren't being used and + * we'll eventually want the decoder enabled. 
+ */ + if (!global_enable) { + dev_dbg(&port->dev, "Enabling HDM decode\n"); + writel(global_ctrl | CXL_HDM_DECODER_ENABLE, hdm_decoder + CXL_HDM_DECODER_CTRL_OFFSET); + } + + return 0; +} + +static int cxl_port_probe(struct device *dev) +{ + struct cxl_port *port = to_cxl_port(dev); + struct cxl_port_data *portdata; + void __iomem *crb; + int rc; + + if (port->component_reg_phys == CXL_RESOURCE_NONE) + return 0; + + portdata = devm_kzalloc(dev, sizeof(*portdata), GFP_KERNEL); + if (!portdata) + return -ENOMEM; + + crb = devm_cxl_iomap_block(&port->dev, port->component_reg_phys, + CXL_COMPONENT_REG_BLOCK_SIZE); + if (!crb) { + dev_err(&port->dev, "No component registers mapped\n"); + return -ENXIO; + } + + rc = map_regs(port, crb, portdata); + if (rc) + return rc; + + get_caps(port, portdata); + if (portdata->caps.count == 0) { + dev_err(&port->dev, "Spec violation. Caps invalid\n"); + return -ENXIO; + } + + rc = enumerate_hdm_decoders(port, portdata); + if (rc) { + dev_err(&port->dev, "Couldn't enumerate decoders (%d)\n", rc); + return rc; + } + + /* + * Bus rescan is done in a workqueue so that we can do so with the + * device lock dropped. + * + * Why do we need to rescan? There is a race between cxl_acpi and + * cxl_mem (which depends on cxl_pci). cxl_mem will only create a port + * if it can establish a path up to a root port, which is enumerated by + * a platform specific driver (ie. cxl_acpi) and bound by this driver. + * While cxl_acpi could do the rescan, it makes sense to be here as + * other platform drivers might require the same functionality. 
+ */
+	INIT_WORK(&port->rescan_work, rescan_ports);
+	queue_work(cxl_port_wq, &port->rescan_work);
+
+	return 0;
+}
+
+static struct cxl_driver cxl_port_driver = {
+	.name = "cxl_port",
+	.probe = cxl_port_probe,
+	.id = CXL_DEVICE_PORT,
+};
+
+static __init int cxl_port_init(void)
+{
+	int rc;
+
+	cxl_port_wq = alloc_ordered_workqueue("cxl_port", 0);
+	if (!cxl_port_wq)
+		return -ENOMEM;
+
+	rc = cxl_driver_register(&cxl_port_driver);
+	if (rc) {
+		destroy_workqueue(cxl_port_wq);
+		return rc;
+	}
+
+	return 0;
+}
+
+static __exit void cxl_port_exit(void)
+{
+	/* Unregister the driver first so no new rescan work can be queued */
+	cxl_driver_unregister(&cxl_port_driver);
+	destroy_workqueue(cxl_port_wq);
+}
+
+module_init(cxl_port_init);
+module_exit(cxl_port_exit);
+MODULE_LICENSE("GPL v2");
+MODULE_IMPORT_NS(CXL);
+MODULE_ALIAS_CXL(CXL_DEVICE_PORT);

From patchwork Sat Nov 20 00:02:48 2021
X-Patchwork-Submitter: Ben Widawsky
X-Patchwork-Id: 1557484
From: Ben Widawsky
To: linux-cxl@vger.kernel.org, linux-pci@vger.kernel.org
Cc: Ben Widawsky, Alison Schofield, Dan Williams, Ira Weiny, Jonathan Cameron, Vishal Verma
Subject: [PATCH 21/23] cxl: Unify port enumeration for decoders
Date: Fri, 19 Nov 2021 16:02:48 -0800
Message-Id: <20211120000250.1663391-22-ben.widawsky@intel.com>
In-Reply-To: <20211120000250.1663391-1-ben.widawsky@intel.com>

The port driver exists to do proper enumeration of the component registers
for ports, including HDM decoder resources. Any port which follows the CXL
specification to implement HDM decoder registers should be handled by the
port driver. This includes host bridge registers that are currently handled
within the cxl_acpi driver.

In moving the responsibility from cxl_acpi to cxl_port, three primary
things are accomplished here:
1. Multi-port host bridges are now handled by the port driver
2. Single port host bridges are handled by the port driver
3. Single port switches without component registers will be handled by the
   port driver

While it's tempting to remove decoder APIs from cxl_core entirely, it is
still required that platform specific drivers are able to add decoders
which aren't specified in CXL 2.0+. An example of this is the CFMWS
parsing which is implemented in cxl_acpi.
Signed-off-by: Ben Widawsky Reviewed-by: Jonathan Cameron --- Changes since RFCv2: - Renamed subject - Reworded commit message - Handle *all* host bridge port enumeration in cxl_port (Dan) - Handle passthrough decoding in cxl_port --- drivers/cxl/acpi.c | 41 +++----------------------------- drivers/cxl/core/bus.c | 6 +++-- drivers/cxl/cxl.h | 2 ++ drivers/cxl/port.c | 54 +++++++++++++++++++++++++++++++++++++++++- 4 files changed, 62 insertions(+), 41 deletions(-) diff --git a/drivers/cxl/acpi.c b/drivers/cxl/acpi.c index c12e4fed7941..c85a04ecbf7f 100644 --- a/drivers/cxl/acpi.c +++ b/drivers/cxl/acpi.c @@ -210,8 +210,6 @@ static int add_host_bridge_uport(struct device *match, void *arg) struct acpi_device *bridge = to_cxl_host_bridge(host, match); struct acpi_pci_root *pci_root; struct cxl_walk_context ctx; - int single_port_map[1], rc; - struct cxl_decoder *cxld; struct cxl_dport *dport; struct cxl_port *port; @@ -245,43 +243,9 @@ static int add_host_bridge_uport(struct device *match, void *arg) return -ENODEV; if (ctx.error) return ctx.error; - if (ctx.count > 1) - return 0; - /* TODO: Scan CHBCR for HDM Decoder resources */ - - /* - * Per the CXL specification (8.2.5.12 CXL HDM Decoder Capability - * Structure) single ported host-bridges need not publish a decoder - * capability when a passthrough decode can be assumed, i.e. all - * transactions that the uport sees are claimed and passed to the single - * dport. Disable the range until the first CXL region is enumerated / - * activated. 
- */ - cxld = cxl_decoder_alloc(port, 1); - if (IS_ERR(cxld)) - return PTR_ERR(cxld); - - cxld->interleave_ways = 1; - cxld->interleave_granularity = PAGE_SIZE; - cxld->target_type = CXL_DECODER_EXPANDER; - cxld->platform_res = (struct resource)DEFINE_RES_MEM(0, 0); - - device_lock(&port->dev); - dport = list_first_entry(&port->dports, typeof(*dport), list); - device_unlock(&port->dev); - - single_port_map[0] = dport->port_id; - - rc = cxl_decoder_add(cxld, single_port_map); - if (rc) - put_device(&cxld->dev); - else - rc = cxl_decoder_autoremove(host, cxld); - - if (rc == 0) - dev_dbg(host, "add: %s\n", dev_name(&cxld->dev)); - return rc; + /* Host bridge ports are enumerated by the port driver. */ + return 0; } struct cxl_chbs_context { @@ -448,3 +412,4 @@ module_platform_driver(cxl_acpi_driver); MODULE_LICENSE("GPL v2"); MODULE_IMPORT_NS(CXL); MODULE_IMPORT_NS(ACPI); +MODULE_SOFTDEP("pre: cxl_port"); diff --git a/drivers/cxl/core/bus.c b/drivers/cxl/core/bus.c index 46a06cfe79bd..acfa212eea21 100644 --- a/drivers/cxl/core/bus.c +++ b/drivers/cxl/core/bus.c @@ -62,7 +62,7 @@ void cxl_unregister_topology_host(struct device *host) } EXPORT_SYMBOL_NS_GPL(cxl_unregister_topology_host, CXL); -static struct device *get_cxl_topology_host(void) +struct device *get_cxl_topology_host(void) { down_read(&topology_host_sem); if (cxl_topology_host) @@ -70,12 +70,14 @@ static struct device *get_cxl_topology_host(void) up_read(&topology_host_sem); return NULL; } +EXPORT_SYMBOL_NS_GPL(get_cxl_topology_host, CXL); -static void put_cxl_topology_host(struct device *dev) +void put_cxl_topology_host(struct device *dev) { WARN_ON(dev != cxl_topology_host); up_read(&topology_host_sem); } +EXPORT_SYMBOL_NS_GPL(put_cxl_topology_host, CXL); static int decoder_match(struct device *dev, void *data) { diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h index 24fa16157d5e..f8354241c5a3 100644 --- a/drivers/cxl/cxl.h +++ b/drivers/cxl/cxl.h @@ -170,6 +170,8 @@ void __iomem 
*devm_cxl_iomap_block(struct device *dev, resource_size_t addr, int cxl_register_topology_host(struct device *host); void cxl_unregister_topology_host(struct device *host); +struct device *get_cxl_topology_host(void); +void put_cxl_topology_host(struct device *dev); /* * cxl_decoder flags that define the type of memory / devices this diff --git a/drivers/cxl/port.c b/drivers/cxl/port.c index 3c03131517af..7a1fc726fe9f 100644 --- a/drivers/cxl/port.c +++ b/drivers/cxl/port.c @@ -233,12 +233,64 @@ static int enumerate_hdm_decoders(struct cxl_port *port, return 0; } +/* + * Per the CXL specification (8.2.5.12 CXL HDM Decoder Capability Structure) + * single ported host-bridges need not publish a decoder capability when a + * passthrough decode can be assumed, i.e. all transactions that the uport sees + * are claimed and passed to the single dport. Disable the range until the first + * CXL region is enumerated / activated. + */ +static int add_passthrough_decoder(struct cxl_port *port) +{ + int single_port_map[1], rc; + struct cxl_decoder *cxld; + struct cxl_dport *dport; + + device_lock_assert(&port->dev); + + cxld = cxl_decoder_alloc(port, 1); + if (IS_ERR(cxld)) + return PTR_ERR(cxld); + + cxld->interleave_ways = 1; + cxld->interleave_granularity = PAGE_SIZE; + cxld->target_type = CXL_DECODER_EXPANDER; + cxld->platform_res = (struct resource)DEFINE_RES_MEM(0, 0); + + dport = list_first_entry(&port->dports, typeof(*dport), list); + single_port_map[0] = dport->port_id; + + rc = cxl_decoder_add_locked(cxld, single_port_map); + if (rc) + put_device(&cxld->dev); + else + rc = cxl_decoder_autoremove(&port->dev, cxld); + + if (rc == 0) + dev_dbg(&port->dev, "add: %s\n", dev_name(&cxld->dev)); + + return rc; +} + static int cxl_port_probe(struct device *dev) { struct cxl_port *port = to_cxl_port(dev); struct cxl_port_data *portdata; void __iomem *crb; - int rc; + int rc = 0; + + if (list_is_singular(&port->dports)) { + struct device *host_dev = get_cxl_topology_host(); + + 
/*
+	 * Root ports (single host bridge downstream) are handled by the
+	 * platform driver.
+	 */
+	if (port->uport != host_dev)
+		rc = add_passthrough_decoder(port);
+	put_cxl_topology_host(host_dev);
+	return rc;
+	}

 	if (port->component_reg_phys == CXL_RESOURCE_NONE)
 		return 0;

From patchwork Sat Nov 20 00:02:49 2021
X-Patchwork-Submitter: Ben Widawsky
X-Patchwork-Id: 1557481
From: Ben Widawsky
To: linux-cxl@vger.kernel.org, linux-pci@vger.kernel.org
Cc: Ben Widawsky, Alison Schofield, Dan Williams, Ira Weiny, Jonathan Cameron, Vishal Verma
Subject: [PATCH 22/23] cxl/mem: Introduce cxl_mem driver
Date: Fri, 19 Nov 2021 16:02:49 -0800
Message-Id: <20211120000250.1663391-23-ben.widawsky@intel.com>
In-Reply-To: <20211120000250.1663391-1-ben.widawsky@intel.com>

Add a driver that is capable of determining whether a device is in a
CXL.mem routed part of the topology. This driver allows a higher level
driver - such as one controlling CXL regions, which is itself a set of CXL
devices - to easily determine if the CXL devices are CXL.mem capable by
checking whether the driver has bound. CXL memory device services may also
be provided by this driver, though none are needed as of yet.

cxl_mem also plays the part of registering itself as an endpoint port,
which is a required step to enumerate the device's HDM decoder resources.

Even though the cxl_mem driver is the only consumer of the new
cxl_scan_ports() introduced in cxl_core, that functionality is PCIe
specific and is therefore kept out of this driver.

As part of this patch, find_dport_by_dev() is promoted to the cxl_core set
of APIs for use by the new driver.
Signed-off-by: Ben Widawsky --- Changes since RFCv2: - Squashed this in with previous patch - Reworked parent port finding - Added DVSEC check - Added wait for active --- .../driver-api/cxl/memory-devices.rst | 9 + drivers/cxl/Kconfig | 15 ++ drivers/cxl/Makefile | 2 + drivers/cxl/acpi.c | 22 +- drivers/cxl/core/Makefile | 1 + drivers/cxl/core/bus.c | 135 +++++++++++- drivers/cxl/core/core.h | 3 + drivers/cxl/core/memdev.c | 2 +- drivers/cxl/core/pci.c | 119 +++++++++++ drivers/cxl/cxl.h | 8 +- drivers/cxl/cxlmem.h | 3 + drivers/cxl/mem.c | 192 ++++++++++++++++++ drivers/cxl/pci.h | 4 + drivers/cxl/port.c | 12 +- tools/testing/cxl/Kbuild | 1 + 15 files changed, 503 insertions(+), 25 deletions(-) create mode 100644 drivers/cxl/core/pci.c create mode 100644 drivers/cxl/mem.c diff --git a/Documentation/driver-api/cxl/memory-devices.rst b/Documentation/driver-api/cxl/memory-devices.rst index fbf0393cdddc..b4ff5f209c34 100644 --- a/Documentation/driver-api/cxl/memory-devices.rst +++ b/Documentation/driver-api/cxl/memory-devices.rst @@ -28,6 +28,9 @@ CXL Memory Device .. kernel-doc:: drivers/cxl/pci.c :internal: +.. kernel-doc:: drivers/cxl/mem.c + :doc: cxl mem + CXL Port -------- .. kernel-doc:: drivers/cxl/port.c @@ -47,6 +50,12 @@ CXL Core .. kernel-doc:: drivers/cxl/core/bus.c :identifiers: +.. kernel-doc:: drivers/cxl/core/pci.c + :doc: cxl core pci + +.. kernel-doc:: drivers/cxl/core/pci.c + :identifiers: + .. kernel-doc:: drivers/cxl/core/pmem.c :doc: cxl pmem diff --git a/drivers/cxl/Kconfig b/drivers/cxl/Kconfig index 3aeb33bba5a3..f5553443ba2a 100644 --- a/drivers/cxl/Kconfig +++ b/drivers/cxl/Kconfig @@ -30,6 +30,21 @@ config CXL_PCI If unsure say 'm'. +config CXL_MEM + tristate "CXL.mem: Memory Devices" + default CXL_BUS + help + The CXL.mem protocol allows a device to act as a provider of + "System RAM" and/or "Persistent Memory" that is fully coherent + as if the memory was attached to the typical CPU memory controller. 
+ This is known as HDM "Host-managed Device Memory". + + Say 'y/m' to enable a driver that will attach to CXL.mem devices for + memory expansion and control of HDM. See Chapter 9.13 in the CXL 2.0 + specification for a detailed description of HDM. + + If unsure say 'm'. + config CXL_MEM_RAW_COMMANDS bool "RAW Command Interface for Memory Devices" depends on CXL_PCI diff --git a/drivers/cxl/Makefile b/drivers/cxl/Makefile index 56fcac2323cb..ce267ef11d93 100644 --- a/drivers/cxl/Makefile +++ b/drivers/cxl/Makefile @@ -1,10 +1,12 @@ # SPDX-License-Identifier: GPL-2.0 obj-$(CONFIG_CXL_BUS) += core/ obj-$(CONFIG_CXL_PCI) += cxl_pci.o +obj-$(CONFIG_CXL_MEM) += cxl_mem.o obj-$(CONFIG_CXL_ACPI) += cxl_acpi.o obj-$(CONFIG_CXL_PMEM) += cxl_pmem.o obj-$(CONFIG_CXL_PORT) += cxl_port.o +cxl_mem-y := mem.o cxl_pci-y := pci.o cxl_acpi-y := acpi.o cxl_pmem-y := pmem.o diff --git a/drivers/cxl/acpi.c b/drivers/cxl/acpi.c index c85a04ecbf7f..d17aa7d8b1c9 100644 --- a/drivers/cxl/acpi.c +++ b/drivers/cxl/acpi.c @@ -171,21 +171,6 @@ __mock int match_add_root_ports(struct pci_dev *pdev, void *data) return 0; } -static struct cxl_dport *find_dport_by_dev(struct cxl_port *port, struct device *dev) -{ - struct cxl_dport *dport; - - device_lock(&port->dev); - list_for_each_entry(dport, &port->dports, list) - if (dport->dport == dev) { - device_unlock(&port->dev); - return dport; - } - - device_unlock(&port->dev); - return NULL; -} - __mock struct acpi_device *to_cxl_host_bridge(struct device *host, struct device *dev) { @@ -216,13 +201,14 @@ static int add_host_bridge_uport(struct device *match, void *arg) if (!bridge) return 0; - dport = find_dport_by_dev(root_port, match); + dport = cxl_find_dport_by_dev(root_port, match); if (!dport) { dev_dbg(host, "host bridge expected and not found\n"); return 0; } - port = devm_cxl_add_port(match, dport->component_reg_phys, root_port); + port = devm_cxl_add_port(match, dport->component_reg_phys, root_port, + NULL); if (IS_ERR(port)) return 
PTR_ERR(port); dev_dbg(host, "%s: add: %s\n", dev_name(match), dev_name(&port->dev)); @@ -360,7 +346,7 @@ static int cxl_acpi_probe(struct platform_device *pdev) if (rc) return rc; - root_port = devm_cxl_add_port(host, CXL_RESOURCE_NONE, root_port); + root_port = devm_cxl_add_port(host, CXL_RESOURCE_NONE, root_port, NULL); if (IS_ERR(root_port)) return PTR_ERR(root_port); dev_dbg(host, "add: %s\n", dev_name(&root_port->dev)); diff --git a/drivers/cxl/core/Makefile b/drivers/cxl/core/Makefile index 40ab50318daf..5b8ec478fb0b 100644 --- a/drivers/cxl/core/Makefile +++ b/drivers/cxl/core/Makefile @@ -7,3 +7,4 @@ cxl_core-y += pmem.o cxl_core-y += regs.o cxl_core-y += memdev.o cxl_core-y += mbox.o +cxl_core-y += pci.o diff --git a/drivers/cxl/core/bus.c b/drivers/cxl/core/bus.c index acfa212eea21..cab3aabd5abc 100644 --- a/drivers/cxl/core/bus.c +++ b/drivers/cxl/core/bus.c @@ -8,6 +8,7 @@ #include #include #include +#include #include "core.h" /** @@ -436,7 +437,7 @@ static int devm_cxl_link_uport(struct device *host, struct cxl_port *port) static struct cxl_port *cxl_port_alloc(struct device *uport, resource_size_t component_reg_phys, - struct cxl_port *parent_port) + struct cxl_port *parent_port, void *data) { struct cxl_port *port; struct device *dev; @@ -465,6 +466,7 @@ static struct cxl_port *cxl_port_alloc(struct device *uport, port->uport = uport; port->component_reg_phys = component_reg_phys; + port->data = data; ida_init(&port->decoder_ida); INIT_LIST_HEAD(&port->dports); @@ -485,16 +487,17 @@ static struct cxl_port *cxl_port_alloc(struct device *uport, * @uport: "physical" device implementing this upstream port * @component_reg_phys: (optional) for configurable cxl_port instances * @parent_port: next hop up in the CXL memory decode hierarchy + * @data: opaque data to be used by the port driver */ struct cxl_port *devm_cxl_add_port(struct device *uport, resource_size_t component_reg_phys, - struct cxl_port *parent_port) + struct cxl_port *parent_port, void 
*data)
 {
 	struct device *dev, *host;
 	struct cxl_port *port;
 	int rc;
 
-	port = cxl_port_alloc(uport, component_reg_phys, parent_port);
+	port = cxl_port_alloc(uport, component_reg_phys, parent_port, data);
 	if (IS_ERR(port))
 		return port;
@@ -531,6 +534,113 @@ struct cxl_port *devm_cxl_add_port(struct device *uport,
 }
 EXPORT_SYMBOL_NS_GPL(devm_cxl_add_port, CXL);
 
+static int add_upstream_port(struct device *host, struct pci_dev *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct cxl_port *parent_port;
+	struct cxl_register_map map;
+	struct cxl_port *port;
+	int rc;
+
+	/* A port is useless if there are no component registers */
+	rc = cxl_find_regblock(pdev, CXL_REGLOC_RBI_COMPONENT, &map);
+	if (rc)
+		return rc;
+
+	parent_port = find_parent_cxl_port(pdev);
+	if (!parent_port)
+		return -ENODEV;
+
+	if (!parent_port->dev.driver) {
+		dev_dbg(dev, "Upstream port has no driver\n");
+		put_device(&parent_port->dev);
+		return -ENODEV;
+	}
+
+	port = devm_cxl_add_port(dev, cxl_reg_block(pdev, &map), parent_port,
+				 NULL);
+	put_device(&parent_port->dev);
+	if (IS_ERR(port)) {
+		dev_err(dev, "Failed to add upstream port %ld\n",
+			PTR_ERR(port));
+		return PTR_ERR(port);
+	}
+	dev_dbg(dev, "Added CXL port\n");
+
+	return 0;
+}
+
+static int add_downstream_port(struct pci_dev *pdev)
+{
+	resource_size_t creg = CXL_RESOURCE_NONE;
+	struct device *dev = &pdev->dev;
+	struct cxl_port *parent_port;
+	struct cxl_register_map map;
+	u32 lnkcap, port_num;
+	int rc;
+
+	/*
+	 * Ports are to be scanned from top down. Therefore, the upstream port
+	 * must already exist.
+	 */
+	parent_port = find_parent_cxl_port(pdev);
+	if (!parent_port)
+		return -ENODEV;
+
+	if (!parent_port->dev.driver) {
+		dev_dbg(dev, "Host port to dport has no driver\n");
+		put_device(&parent_port->dev);
+		return -ENODEV;
+	}
+
+	rc = cxl_find_regblock(pdev, CXL_REGLOC_RBI_COMPONENT, &map);
+	if (rc)
+		creg = CXL_RESOURCE_NONE;
+	else
+		creg = cxl_reg_block(pdev, &map);
+
+	if (pci_read_config_dword(pdev, pci_pcie_cap(pdev) + PCI_EXP_LNKCAP,
+				  &lnkcap) != PCIBIOS_SUCCESSFUL) {
+		put_device(&parent_port->dev);
+		return -ENXIO;
+	}
+	port_num = FIELD_GET(PCI_EXP_LNKCAP_PN, lnkcap);
+
+	rc = cxl_add_dport(parent_port, dev, port_num, creg, false);
+	put_device(&parent_port->dev);
+	if (rc)
+		dev_err(dev, "Failed to add downstream port to %s\n",
+			dev_name(&parent_port->dev));
+	else
+		dev_dbg(dev, "Added downstream port to %s\n",
+			dev_name(&parent_port->dev));
+
+	return rc;
+}
+
+static int match_add_ports(struct pci_dev *pdev, void *data)
+{
+	struct device *dev = &pdev->dev;
+	struct device *host = data;
+
+	if (is_cxl_switch_usp(dev))
+		return add_upstream_port(host, pdev);
+	else if (is_cxl_switch_dsp(dev))
+		return add_downstream_port(pdev);
+	else
+		return 0;
+}
+
+/**
+ * cxl_scan_ports() - Adds all ports for the subtree beginning with @dport
+ * @dport: Beginning node of the CXL topology
+ */
+void cxl_scan_ports(struct cxl_dport *dport)
+{
+	struct device *d = dport->dport;
+	struct pci_dev *pdev = to_pci_dev(d);
+
+	pci_walk_bus(pdev->bus, match_add_ports, &dport->port->dev);
+}
+EXPORT_SYMBOL_NS_GPL(cxl_scan_ports, CXL);
+
 static struct cxl_dport *find_dport(struct cxl_port *port, int id)
 {
 	struct cxl_dport *dport;
@@ -614,6 +724,23 @@ int cxl_add_dport(struct cxl_port *port, struct device *dport_dev, int port_id,
 }
 EXPORT_SYMBOL_NS_GPL(cxl_add_dport, CXL);
 
+struct cxl_dport *cxl_find_dport_by_dev(struct cxl_port *port,
+					struct device *dev)
+{
+	struct cxl_dport *dport;
+
+	device_lock(&port->dev);
+	list_for_each_entry(dport, &port->dports, list)
+		if (dport->dport == dev) {
+
device_unlock(&port->dev); + return dport; + } + + device_unlock(&port->dev); + return NULL; +} +EXPORT_SYMBOL_NS_GPL(cxl_find_dport_by_dev, CXL); + static int decoder_populate_targets(struct cxl_decoder *cxld, struct cxl_port *port, int *target_map) { @@ -858,6 +985,8 @@ static int cxl_device_id(struct device *dev) return CXL_DEVICE_NVDIMM; if (dev->type == &cxl_port_type) return CXL_DEVICE_PORT; + if (dev->type == &cxl_memdev_type) + return CXL_DEVICE_MEMORY_EXPANDER; return 0; } diff --git a/drivers/cxl/core/core.h b/drivers/cxl/core/core.h index e0c9aacc4e9c..c5836f071eaa 100644 --- a/drivers/cxl/core/core.h +++ b/drivers/cxl/core/core.h @@ -6,6 +6,7 @@ extern const struct device_type cxl_nvdimm_bridge_type; extern const struct device_type cxl_nvdimm_type; +extern const struct device_type cxl_memdev_type; extern struct attribute_group cxl_base_attribute_group; @@ -20,4 +21,6 @@ void cxl_memdev_exit(void); void cxl_mbox_init(void); void cxl_mbox_exit(void); +struct cxl_port *find_parent_cxl_port(struct pci_dev *pdev); + #endif /* __CXL_CORE_H__ */ diff --git a/drivers/cxl/core/memdev.c b/drivers/cxl/core/memdev.c index 61029cb7ac62..149665fd2d3f 100644 --- a/drivers/cxl/core/memdev.c +++ b/drivers/cxl/core/memdev.c @@ -127,7 +127,7 @@ static const struct attribute_group *cxl_memdev_attribute_groups[] = { NULL, }; -static const struct device_type cxl_memdev_type = { +const struct device_type cxl_memdev_type = { .name = "cxl_memdev", .release = cxl_memdev_release, .devnode = cxl_memdev_devnode, diff --git a/drivers/cxl/core/pci.c b/drivers/cxl/core/pci.c new file mode 100644 index 000000000000..818e30571e4d --- /dev/null +++ b/drivers/cxl/core/pci.c @@ -0,0 +1,119 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright(c) 2021 Intel Corporation. All rights reserved. */ +#include +#include +#include +#include +#include "core.h" + +/** + * DOC: cxl core pci + * + * Compute Express Link protocols are layered on top of PCIe. 
CXL core provides
+ * a set of helpers for CXL interactions which occur via PCIe.
+ */
+
+/**
+ * find_parent_cxl_port() - Finds parent port through PCIe mechanisms
+ * @pdev: PCIe USP or DSP to find an upstream port for
+ *
+ * Once all CXL ports are enumerated, there is no need to reference the PCIe
+ * parallel universe as all downstream ports are contained in a linked list, and
+ * all upstream ports are accessible via pointer. During the enumeration, it is
+ * very convenient to be able to peek up one level in the hierarchy without
+ * needing the established relationship between data structures so that the
+ * parenting can be done as the ports/dports are created.
+ *
+ * A reference is kept to the found port.
+ */
+struct cxl_port *find_parent_cxl_port(struct pci_dev *pdev)
+{
+	struct device *parent_dev, *gparent_dev;
+
+	/* Parent is either a downstream port, or root port */
+	parent_dev = get_device(pdev->dev.parent);
+
+	if (is_cxl_switch_usp(&pdev->dev)) {
+		if (dev_WARN_ONCE(&pdev->dev,
+				  pci_pcie_type(to_pci_dev(parent_dev)) !=
+					  PCI_EXP_TYPE_DOWNSTREAM &&
+				  pci_pcie_type(to_pci_dev(parent_dev)) !=
+					  PCI_EXP_TYPE_ROOT_PORT,
+				  "Parent not downstream\n"))
+			goto err;
+
+		/*
+		 * Grandparent is either an upstream port or a platform device
+		 * that has been added as a cxl_port already.
+		 */
+		gparent_dev = get_device(parent_dev->parent);
+		put_device(parent_dev);
+
+		return to_cxl_port(gparent_dev);
+	} else if (is_cxl_switch_dsp(&pdev->dev)) {
+		if (dev_WARN_ONCE(&pdev->dev,
+				  pci_pcie_type(to_pci_dev(parent_dev)) !=
+					  PCI_EXP_TYPE_UPSTREAM,
+				  "Parent not upstream\n"))
+			goto err;
+		return to_cxl_port(parent_dev);
+	}
+
+err:
+	dev_WARN(&pdev->dev, "Invalid topology\n");
+	put_device(parent_dev);
+	return NULL;
+}
+
+/*
+ * Unlike endpoints, switches don't discern CXL.mem capability. Simply finding
+ * the DVSEC is sufficient.
+ */ +static bool is_cxl_switch(struct pci_dev *pdev) +{ + return pci_find_dvsec_capability(pdev, PCI_DVSEC_VENDOR_ID_CXL, + CXL_DVSEC_PORT_EXTENSIONS); +} + +/** + * is_cxl_switch_usp() - Is the device a CXL.mem enabled switch + * @dev: Device to query for switch type + * + * If the device is a CXL.mem capable upstream switch port return true; + * otherwise return false. + */ +bool is_cxl_switch_usp(struct device *dev) +{ + struct pci_dev *pdev; + + if (!dev_is_pci(dev)) + return false; + + pdev = to_pci_dev(dev); + + return pci_is_pcie(pdev) && + pci_pcie_type(pdev) == PCI_EXP_TYPE_UPSTREAM && + is_cxl_switch(pdev); +} +EXPORT_SYMBOL_NS_GPL(is_cxl_switch_usp, CXL); + +/** + * is_cxl_switch_dsp() - Is the device a CXL.mem enabled switch + * @dev: Device to query for switch type + * + * If the device is a CXL.mem capable downstream switch port return true; + * otherwise return false. + */ +bool is_cxl_switch_dsp(struct device *dev) +{ + struct pci_dev *pdev; + + if (!dev_is_pci(dev)) + return false; + + pdev = to_pci_dev(dev); + + return pci_is_pcie(pdev) && + pci_pcie_type(pdev) == PCI_EXP_TYPE_DOWNSTREAM && + is_cxl_switch(pdev); +} +EXPORT_SYMBOL_NS_GPL(is_cxl_switch_dsp, CXL); diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h index f8354241c5a3..3bda806f4244 100644 --- a/drivers/cxl/cxl.h +++ b/drivers/cxl/cxl.h @@ -296,6 +296,7 @@ struct cxl_port { * @port: reference to cxl_port that contains this downstream port * @list: node for a cxl_port's list of cxl_dport instances * @root_port_link: node for global list of root ports + * @data: Opaque data passed by other drivers, used by port driver */ struct cxl_dport { struct device *dport; @@ -304,16 +305,20 @@ struct cxl_dport { struct cxl_port *port; struct list_head list; struct list_head root_port_link; + void *data; }; struct cxl_port *to_cxl_port(struct device *dev); struct cxl_port *devm_cxl_add_port(struct device *uport, resource_size_t component_reg_phys, - struct cxl_port *parent_port); + struct cxl_port 
*parent_port, void *data); +void cxl_scan_ports(struct cxl_dport *root_port); int cxl_add_dport(struct cxl_port *port, struct device *dport, int port_id, resource_size_t component_reg_phys, bool root_port); struct cxl_dport *cxl_get_root_dport(struct device *dev); +struct cxl_dport *cxl_find_dport_by_dev(struct cxl_port *port, + struct device *dev); struct cxl_decoder *to_cxl_decoder(struct device *dev); bool is_root_decoder(struct device *dev); @@ -349,6 +354,7 @@ void cxl_driver_unregister(struct cxl_driver *cxl_drv); #define CXL_DEVICE_NVDIMM_BRIDGE 1 #define CXL_DEVICE_NVDIMM 2 #define CXL_DEVICE_PORT 3 +#define CXL_DEVICE_MEMORY_EXPANDER 4 #define MODULE_ALIAS_CXL(type) MODULE_ALIAS("cxl:t" __stringify(type) "*") #define CXL_MODALIAS_FMT "cxl:t%d" diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h index b1d753541f4e..de8f6fce74b5 100644 --- a/drivers/cxl/cxlmem.h +++ b/drivers/cxl/cxlmem.h @@ -35,12 +35,15 @@ * @cdev: char dev core object for ioctl operations * @cxlds: The device state backing this device * @id: id number of this memdev instance. + * @component_reg_phys: register base of component registers + * @root_port: Hostbridge's root port connected to this endpoint */ struct cxl_memdev { struct device dev; struct cdev cdev; struct cxl_dev_state *cxlds; int id; + struct cxl_dport *root_port; }; static inline struct cxl_memdev *to_cxl_memdev(struct device *dev) diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c new file mode 100644 index 000000000000..e954144af4b8 --- /dev/null +++ b/drivers/cxl/mem.c @@ -0,0 +1,192 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright(c) 2021 Intel Corporation. All rights reserved. */ +#include +#include +#include + +#include "cxlmem.h" +#include "pci.h" + +/** + * DOC: cxl mem + * + * CXL memory endpoint devices and switches are CXL capable devices that are + * participating in CXL.mem protocol. 
Their functionality builds on top of the
+ * CXL.io protocol that allows enumerating and configuring components via
+ * standard PCI mechanisms.
+ *
+ * The cxl_mem driver owns kicking off the enumeration of this CXL.mem
+ * capability. With the detection of a CXL capable endpoint, the driver will
+ * walk up to find the platform specific port it is connected to, and determine
+ * if there are intervening switches in the path. If there are switches, a
+ * secondary action is taken to enumerate those (implemented in cxl_core).
+ * Finally, the cxl_mem driver will add the device it is bound to as a CXL port
+ * for use in higher level operations.
+ */
+
+struct walk_ctx {
+	struct cxl_dport *root_port;
+	bool has_switch;
+};
+
+/**
+ * walk_to_root_port() - Walk up to root port
+ * @dev: Device to walk up from
+ * @ctx: Information to populate while walking
+ *
+ * A platform specific driver such as cxl_acpi is responsible for scanning CXL
+ * topologies in a top-down fashion. If the CXL memory device is directly
+ * connected to the top level hostbridge, nothing else needs to be done. If,
+ * however, there are CXL components (i.e. a CXL switch) in between an endpoint
+ * and a hostbridge, the platform specific driver must be notified after all
+ * the components are enumerated.
+ */ +static void walk_to_root_port(struct device *dev, struct walk_ctx *ctx) +{ + struct cxl_dport *root_port; + + if (!dev->parent) + return; + + root_port = cxl_get_root_dport(dev); + if (root_port) + ctx->root_port = root_port; + + if (is_cxl_switch_usp(dev)) + ctx->has_switch = true; + + walk_to_root_port(dev->parent, ctx); +} + +static void remove_endpoint(void *_cxlmd) +{ + struct cxl_memdev *cxlmd = _cxlmd; + + if (cxlmd->root_port) + sysfs_remove_link(&cxlmd->dev.kobj, "root_port"); +} + +static int wait_for_media(struct cxl_memdev *cxlmd) +{ + struct cxl_dev_state *cxlds = cxlmd->cxlds; + struct cxl_endpoint_dvsec_info *info = cxlds->info; + int rc; + + if (!info) + return -ENXIO; + + if (!info->mem_enabled) + return -EBUSY; + + rc = cxlds->wait_media_ready(cxlds); + if (rc) + return rc; + + /* + * We know the device is active, and enabled, if any ranges are non-zero + * we'll need to check later before adding the port since that owns the + * HDM decoder registers. + */ + return 0; +} + +static int create_endpoint(struct device *dev, struct cxl_port *parent, + struct cxl_dport *dport) +{ + struct cxl_memdev *cxlmd = to_cxl_memdev(dev); + struct cxl_dev_state *cxlds = cxlmd->cxlds; + struct cxl_endpoint_dvsec_info *info = cxlds->info; + struct cxl_port *endpoint; + int rc; + + endpoint = + devm_cxl_add_port(dev, cxlds->component_reg_phys, parent, info); + if (IS_ERR(endpoint)) + return PTR_ERR(endpoint); + + rc = sysfs_create_link(&cxlmd->dev.kobj, &dport->dport->kobj, + "root_port"); + if (rc) { + device_del(&endpoint->dev); + return rc; + } + dev_dbg(dev, "add: %s\n", dev_name(&endpoint->dev)); + + return devm_add_action_or_reset(dev, remove_endpoint, cxlmd); +} + +static int cxl_mem_probe(struct device *dev) +{ + struct cxl_memdev *cxlmd = to_cxl_memdev(dev); + struct cxl_port *hostbridge, *parent_port; + struct walk_ctx ctx = { NULL, false }; + struct cxl_dport *dport; + int rc; + + rc = wait_for_media(cxlmd); + if (rc) { + dev_err(dev, "Media not 
active (%d)\n", rc); + return rc; + } + + walk_to_root_port(dev, &ctx); + + /* + * Couldn't find a CXL capable root port. This may happen even with a + * CXL capable topology if cxl_acpi hasn't completed yet. A rescan will + * occur. + */ + if (!ctx.root_port) + return -ENODEV; + + hostbridge = ctx.root_port->port; + device_lock(&hostbridge->dev); + + /* hostbridge has no port driver, the topology isn't enabled yet */ + if (!hostbridge->dev.driver) { + device_unlock(&hostbridge->dev); + return -ENODEV; + } + + /* No switch + found root port means we're done */ + if (!ctx.has_switch) { + parent_port = to_cxl_port(&hostbridge->dev); + dport = ctx.root_port; + goto out; + } + + /* Walk down from the root port and add all switches */ + cxl_scan_ports(ctx.root_port); + + /* If parent is a dport the endpoint is good to go. */ + parent_port = to_cxl_port(dev->parent->parent); + dport = cxl_find_dport_by_dev(parent_port, dev->parent); + if (!dport) { + rc = -ENODEV; + goto err_out; + } + +out: + rc = create_endpoint(dev, parent_port, dport); + if (rc) + goto err_out; + + cxlmd->root_port = ctx.root_port; + +err_out: + device_unlock(&hostbridge->dev); + return rc; +} + +static struct cxl_driver cxl_mem_driver = { + .name = "cxl_mem", + .probe = cxl_mem_probe, + .id = CXL_DEVICE_MEMORY_EXPANDER, +}; + +module_cxl_driver(cxl_mem_driver); + +MODULE_LICENSE("GPL v2"); +MODULE_IMPORT_NS(CXL); +MODULE_ALIAS_CXL(CXL_DEVICE_MEMORY_EXPANDER); +MODULE_SOFTDEP("pre: cxl_port"); diff --git a/drivers/cxl/pci.h b/drivers/cxl/pci.h index 2a48cd65bf59..3fd0909522f2 100644 --- a/drivers/cxl/pci.h +++ b/drivers/cxl/pci.h @@ -15,6 +15,7 @@ /* CXL 2.0 8.1.3: PCIe DVSEC for CXL Device */ #define CXL_DVSEC_PCIE_DEVICE 0 + #define CXL_DVSEC_PCIE_DEVICE_CAP_OFFSET 0xA #define CXL_DVSEC_PCIE_DEVICE_MEM_CAPABLE BIT(2) #define CXL_DVSEC_PCIE_DEVICE_HDM_COUNT_MASK GENMASK(5, 4) @@ -64,4 +65,7 @@ enum cxl_regloc_type { ((resource_size_t)(pci_resource_start(pdev, (map)->barno) + \ (map)->block_offset)) 
+bool is_cxl_switch_usp(struct device *dev);
+bool is_cxl_switch_dsp(struct device *dev);
+
 #endif /* __CXL_PCI_H__ */
diff --git a/drivers/cxl/port.c b/drivers/cxl/port.c
index 7a1fc726fe9f..02b1c8cf7567 100644
--- a/drivers/cxl/port.c
+++ b/drivers/cxl/port.c
@@ -108,8 +108,16 @@ static u64 get_decoder_size(void __iomem *hdm_decoder, int n)
 
 static bool is_endpoint_port(struct cxl_port *port)
 {
-	/* Endpoints can't be ports... yet! */
-	return false;
+	/*
+	 * It's tempting to just check list_empty(port->dports) here, but this
+	 * might get called before dports are setup for a port.
+	 */
+
+	if (!port->uport->driver)
+		return false;
+
+	return to_cxl_drv(port->uport->driver)->id ==
+	       CXL_DEVICE_MEMORY_EXPANDER;
 }
 
 static void rescan_ports(struct work_struct *work)
diff --git a/tools/testing/cxl/Kbuild b/tools/testing/cxl/Kbuild
index 1acdf2fc31c5..4c2359772f3c 100644
--- a/tools/testing/cxl/Kbuild
+++ b/tools/testing/cxl/Kbuild
@@ -30,6 +30,7 @@ cxl_core-y += $(CXL_CORE_SRC)/pmem.o
 cxl_core-y += $(CXL_CORE_SRC)/regs.o
 cxl_core-y += $(CXL_CORE_SRC)/memdev.o
 cxl_core-y += $(CXL_CORE_SRC)/mbox.o
+cxl_core-y += $(CXL_CORE_SRC)/pci.o
 cxl_core-y += config_check.o
 cxl_core-y += mock_pmem.o

From patchwork Sat Nov 20 00:02:50 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ben Widawsky
X-Patchwork-Id: 1557480
From: Ben Widawsky
To: linux-cxl@vger.kernel.org, linux-pci@vger.kernel.org
Cc: Ben Widawsky, Alison Schofield, Dan Williams, Ira Weiny, Jonathan Cameron, Vishal Verma
Subject: [PATCH 23/23] cxl/mem: Disable switch hierarchies for now
Date: Fri, 19 Nov 2021 16:02:50 -0800
Message-Id: <20211120000250.1663391-24-ben.widawsky@intel.com>
X-Mailer: git-send-email 2.34.0
In-Reply-To: <20211120000250.1663391-1-ben.widawsky@intel.com>
References: <20211120000250.1663391-1-ben.widawsky@intel.com>
MIME-Version: 1.0
Precedence: bulk
X-Mailing-List: linux-pci@vger.kernel.org

Switches aren't supported by the region driver yet. If a device finds
itself under a switch, its driver will not bind, so the device cannot
later be used for region creation/configuration.
Signed-off-by: Ben Widawsky
---
 drivers/cxl/mem.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c
index e954144af4b8..997898e78d63 100644
--- a/drivers/cxl/mem.c
+++ b/drivers/cxl/mem.c
@@ -155,6 +155,11 @@ static int cxl_mem_probe(struct device *dev)
 		goto out;
 	}
 
+	/* FIXME: Add true switch support */
+	dev_err(dev, "Devices behind switches are currently unsupported\n");
+	rc = -ENODEV;
+	goto err_out;
+
 	/* Walk down from the root port and add all switches */
 	cxl_scan_ports(ctx.root_port);