[RFC,v3,04/16] cxl/mem: Introduce a driver for CXL-2.0-Type-3 endpoints

Message ID 20210111225121.820014-5-ben.widawsky@intel.com
State New
Series [RFC,v3,01/16] docs: cxl: Add basic documentation

Commit Message

Ben Widawsky Jan. 11, 2021, 10:51 p.m. UTC
From: Dan Williams <dan.j.williams@intel.com>

The CXL.mem protocol allows a device to act as a provider of "System
RAM" and/or "Persistent Memory" that is fully coherent as if the memory
was attached to the typical CPU memory controller.

With the CXL 2.0 specification, a PCI endpoint can implement a "Type-3"
device interface and give the operating system control over "Host
Managed Device Memory". See section 2.3, "Type 3 CXL Device".

The memory range exported by the device may optionally be described by
the platform firmware memory map, or by infrastructure like LIBNVDIMM to
provision persistent memory capacity from one or more CXL.mem devices.

A prerequisite for Linux-managed memory-capacity provisioning is this
cxl_mem driver, which can speak the mailbox protocol defined in section
8.2.8.4, "Mailbox Registers".

For now, just land the driver boilerplate; subsequent commits will fill
in the functionality.

Link: https://www.computeexpresslink.org/download-the-specification
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
---
 Documentation/cxl/memory-devices.rst |  9 ++++
 drivers/cxl/Kconfig                  | 22 +++++++++
 drivers/cxl/Makefile                 |  2 +
 drivers/cxl/mem.c                    | 69 ++++++++++++++++++++++++++++
 drivers/cxl/pci.h                    | 20 ++++++++
 5 files changed, 122 insertions(+)
 create mode 100644 drivers/cxl/mem.c
 create mode 100644 drivers/cxl/pci.h

Comments

Randy Dunlap Jan. 12, 2021, 7:08 a.m. UTC | #1
On 1/11/21 2:51 PM, Ben Widawsky wrote:
> diff --git a/drivers/cxl/Kconfig b/drivers/cxl/Kconfig
> index 68da926ba5b1..0ac5080cd6e0 100644
> --- a/drivers/cxl/Kconfig
> +++ b/drivers/cxl/Kconfig
> @@ -33,4 +33,26 @@ config CXL_ACPI
>  	  specification.
>  
>  	  If unsure say 'm'
> +
> +config CXL_MEM
> +        tristate "CXL.mem: Endpoint Support"
> +        depends on PCI && CXL_BUS_PROVIDER
> +        default CXL_BUS_PROVIDER
> +        help
> +          The CXL.mem protocol allows a device to act as a provider of
> +          "System RAM" and/or "Persistent Memory" that is fully coherent
> +          as if the memory was attached to the typical CPU memory
> +          controller.

All of those indented lines should use one tab (tristate, depends, default,
and help). The help text should be indented with one tab + 2 spaces,
like the lines below are.

> +
> +	  Say 'y/m' to enable a driver (named "cxl_mem.ko" when built as
> +	  a module) that will attach to CXL.mem devices for
> +	  configuration, provisioning, and health monitoring. This
> +	  driver is required for dynamic provisioning of CXL.mem
> +	  attached memory which is a pre-requisite for persistent memory

Patch 2 spells this as "prerequisite", which is preferred IMO.

> +	  support. Typically volatile memory is mapped by platform
> +	  firmware and included in the platform memory map, but in some
> +	  cases the OS is responsible for mapping that memory. See
> +	  Chapter 2.3 Type 3 CXL Device in the CXL 2.0 specification.
> +
> +          If unsure say 'm'.

That last line needs one tab + 2 spaces for indentation.

>  endif
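
Applied, the corrected entry would look like this (help text abbreviated
here; keyword lines take a single tab, the help body a tab plus two
spaces):

config CXL_MEM
	tristate "CXL.mem: Endpoint Support"
	depends on PCI && CXL_BUS_PROVIDER
	default CXL_BUS_PROVIDER
	help
	  The CXL.mem protocol allows a device to act as a provider of
	  "System RAM" and/or "Persistent Memory" that is fully coherent
	  as if the memory was attached to the typical CPU memory
	  controller.

	  If unsure say 'm'.
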
Jonathan Cameron Jan. 12, 2021, 7:01 p.m. UTC | #2
On Mon, 11 Jan 2021 14:51:08 -0800
Ben Widawsky <ben.widawsky@intel.com> wrote:

> [..]

Just one passing comment inline.

> diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c
> new file mode 100644
> index 000000000000..005404888942
> --- /dev/null
> +++ b/drivers/cxl/mem.c
> @@ -0,0 +1,69 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/* Copyright(c) 2020 Intel Corporation. All rights reserved. */
> +#include <linux/module.h>
> +#include <linux/pci.h>
> +#include <linux/io.h>
> +#include "acpi.h"
> +#include "pci.h"
> +
> +static int cxl_mem_dvsec(struct pci_dev *pdev, int dvsec)

Is it worth pulling this out to a utility library now, as we are going
to keep needing this for CXL devices?
Arguably, with a vendor_id parameter it might make sense to have it as
a utility function for PCI rather than CXL alone.

> [..]
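
A sketch of what such a shared helper might look like; the name
pci_find_dvsec() is hypothetical (a helper of this shape later landed in
the PCI core as pci_find_dvsec_capability()), and the PCI_DVSEC_* offset
macros are the ones this patch defines in pci.h:

/*
 * Hypothetical generic DVSEC lookup: the same loop as cxl_mem_dvsec(),
 * but parameterized by vendor ID so it could live in the PCI core.
 */
static int pci_find_dvsec(struct pci_dev *pdev, u16 vendor, u16 dvsec)
{
	int pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_DVSEC);

	while (pos) {
		u16 v, id;

		/* DVSEC Header 1 carries the vendor ID, Header 2 the DVSEC ID */
		pci_read_config_word(pdev, pos + PCI_DVSEC_VENDOR_ID_OFFSET, &v);
		pci_read_config_word(pdev, pos + PCI_DVSEC_ID_OFFSET, &id);
		if (v == vendor && id == dvsec)
			return pos;

		pos = pci_find_next_ext_capability(pdev, pos,
						   PCI_EXT_CAP_ID_DVSEC);
	}

	return 0;
}

cxl_mem_probe() would then call
pci_find_dvsec(pdev, PCI_DVSEC_VENDOR_ID_CXL, PCI_DVSEC_ID_CXL_REGLOC).
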
Dan Williams Jan. 12, 2021, 8:06 p.m. UTC | #3
On Tue, Jan 12, 2021 at 11:03 AM Jonathan Cameron
<Jonathan.Cameron@huawei.com> wrote:
>
> On Mon, 11 Jan 2021 14:51:08 -0800
> Ben Widawsky <ben.widawsky@intel.com> wrote:
>
> [..]
>
>
> Just one passing comment inline.
> [..]
>
> > +static int cxl_mem_dvsec(struct pci_dev *pdev, int dvsec)
>
> Is it worth pulling this out to a utility library now, as we are going
> to keep needing this for CXL devices?
> Arguably, with a vendor_id parameter it might make sense to have it as
> a utility function for PCI rather than CXL alone.

Sure, cxl_mem_dvsec() can move to a central location, but I'd wait for
the first incremental user to split it out.

Patch

diff --git a/Documentation/cxl/memory-devices.rst b/Documentation/cxl/memory-devices.rst
index 6ce88f9d5f4f..134c9b6b4ff4 100644
--- a/Documentation/cxl/memory-devices.rst
+++ b/Documentation/cxl/memory-devices.rst
@@ -23,6 +23,15 @@  ACPI CXL
 .. kernel-doc:: drivers/cxl/acpi.c
    :internal:
 
+CXL Memory Device
+-----------------
+
+.. kernel-doc:: drivers/cxl/mem.c
+   :doc: cxl mem
+
+.. kernel-doc:: drivers/cxl/mem.c
+   :internal:
+
 External Interfaces
 ===================
 
diff --git a/drivers/cxl/Kconfig b/drivers/cxl/Kconfig
index 68da926ba5b1..0ac5080cd6e0 100644
--- a/drivers/cxl/Kconfig
+++ b/drivers/cxl/Kconfig
@@ -33,4 +33,26 @@  config CXL_ACPI
 	  specification.
 
 	  If unsure say 'm'
+
+config CXL_MEM
+        tristate "CXL.mem: Endpoint Support"
+        depends on PCI && CXL_BUS_PROVIDER
+        default CXL_BUS_PROVIDER
+        help
+          The CXL.mem protocol allows a device to act as a provider of
+          "System RAM" and/or "Persistent Memory" that is fully coherent
+          as if the memory was attached to the typical CPU memory
+          controller.
+
+	  Say 'y/m' to enable a driver (named "cxl_mem.ko" when built as
+	  a module) that will attach to CXL.mem devices for
+	  configuration, provisioning, and health monitoring. This
+	  driver is required for dynamic provisioning of CXL.mem
+	  attached memory which is a pre-requisite for persistent memory
+	  support. Typically volatile memory is mapped by platform
+	  firmware and included in the platform memory map, but in some
+	  cases the OS is responsible for mapping that memory. See
+	  Chapter 2.3 Type 3 CXL Device in the CXL 2.0 specification.
+
+          If unsure say 'm'.
 endif
diff --git a/drivers/cxl/Makefile b/drivers/cxl/Makefile
index d38cd34a2582..97fdffb00f2d 100644
--- a/drivers/cxl/Makefile
+++ b/drivers/cxl/Makefile
@@ -1,5 +1,7 @@ 
 # SPDX-License-Identifier: GPL-2.0
 obj-$(CONFIG_CXL_ACPI) += cxl_acpi.o
+obj-$(CONFIG_CXL_MEM) += cxl_mem.o
 
 ccflags-y += -DDEFAULT_SYMBOL_NAMESPACE=CXL
 cxl_acpi-y := acpi.o
+cxl_mem-y := mem.o
diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c
new file mode 100644
index 000000000000..005404888942
--- /dev/null
+++ b/drivers/cxl/mem.c
@@ -0,0 +1,69 @@ 
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2020 Intel Corporation. All rights reserved. */
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/io.h>
+#include "acpi.h"
+#include "pci.h"
+
+static int cxl_mem_dvsec(struct pci_dev *pdev, int dvsec)
+{
+	int pos;
+
+	pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_DVSEC);
+	if (!pos)
+		return 0;
+
+	while (pos) {
+		u16 vendor, id;
+
+		pci_read_config_word(pdev, pos + PCI_DVSEC_VENDOR_ID_OFFSET,
+				     &vendor);
+		pci_read_config_word(pdev, pos + PCI_DVSEC_ID_OFFSET, &id);
+		if (vendor == PCI_DVSEC_VENDOR_ID_CXL && dvsec == id)
+			return pos;
+
+		pos = pci_find_next_ext_capability(pdev, pos,
+						   PCI_EXT_CAP_ID_DVSEC);
+	}
+
+	return 0;
+}
+
+static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+{
+	struct device *dev = &pdev->dev;
+	int rc, regloc;
+
+	rc = cxl_bus_acquire(pdev);
+	if (rc != 0) {
+		dev_err(dev, "failed to acquire interface\n");
+		return rc;
+	}
+
+	regloc = cxl_mem_dvsec(pdev, PCI_DVSEC_ID_CXL_REGLOC);
+	if (!regloc) {
+		dev_err(dev, "register location dvsec not found\n");
+		return -ENXIO;
+	}
+
+	return 0;
+}
+
+static const struct pci_device_id cxl_mem_pci_tbl[] = {
+	/* PCI class code for CXL.mem Type-3 Devices */
+	{ PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID,
+	  PCI_CLASS_MEMORY_CXL, 0xffffff, 0 },
+	{ /* terminate list */ },
+};
+MODULE_DEVICE_TABLE(pci, cxl_mem_pci_tbl);
+
+static struct pci_driver cxl_mem_driver = {
+	.name			= KBUILD_MODNAME,
+	.id_table		= cxl_mem_pci_tbl,
+	.probe			= cxl_mem_probe,
+};
+
+MODULE_LICENSE("GPL v2");
+module_pci_driver(cxl_mem_driver);
+MODULE_IMPORT_NS(CXL);
diff --git a/drivers/cxl/pci.h b/drivers/cxl/pci.h
new file mode 100644
index 000000000000..a8a9935fa90b
--- /dev/null
+++ b/drivers/cxl/pci.h
@@ -0,0 +1,20 @@ 
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright(c) 2020 Intel Corporation. All rights reserved. */
+#ifndef __CXL_PCI_H__
+#define __CXL_PCI_H__
+
+#define PCI_CLASS_MEMORY_CXL	0x050210
+
+/*
+ * See section 8.1 Configuration Space Registers in the CXL 2.0
+ * Specification
+ */
+#define PCI_EXT_CAP_ID_DVSEC		0x23
+#define PCI_DVSEC_VENDOR_ID_CXL		0x1E98
+#define PCI_DVSEC_VENDOR_ID_OFFSET	0x4
+#define PCI_DVSEC_ID_CXL		0x0
+#define PCI_DVSEC_ID_OFFSET		0x8
+
+#define PCI_DVSEC_ID_CXL_REGLOC		0x8
+
+#endif /* __CXL_PCI_H__ */
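
A side note on the ID table in mem.c above: the open-coded class-code
match could equivalently be written with the PCI_DEVICE_CLASS() helper
from <linux/pci.h>; a sketch of that form, with identical matching
behavior (full 24-bit class code, any vendor:device):

static const struct pci_device_id cxl_mem_pci_tbl[] = {
	/* Match on the CXL.mem Type-3 class code, any vendor:device */
	{ PCI_DEVICE_CLASS(PCI_CLASS_MEMORY_CXL, 0xffffff) },
	{ /* terminate list */ },
};
MODULE_DEVICE_TABLE(pci, cxl_mem_pci_tbl);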