[v9,25/25] MPIPL: Add documentation

Message ID 20190712111802.23560-26-hegdevasant@linux.vnet.ibm.com
State Accepted
Series
  • MPIPL support

Checks

Context Check Description
snowpatch_ozlabs/snowpatch_job_snowpatch-skiboot-dco success Signed-off-by present
snowpatch_ozlabs/snowpatch_job_snowpatch-skiboot success Test snowpatch/job/snowpatch-skiboot on branch master
snowpatch_ozlabs/apply_patch success Successfully applied on branch master (4db38a36b31045f0a116d388ddeac850b38c8680)

Commit Message

Vasant Hegde July 12, 2019, 11:18 a.m. UTC
Document MPIPL device tree and OPAL APIs.

Signed-off-by: Vasant Hegde <hegdevasant@linux.vnet.ibm.com>
Signed-off-by: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
---
 doc/device-tree/ibm,opal/dump.rst      |  37 ++++++++
 doc/index.rst                          |   1 +
 doc/mpipl.rst                          |  48 ++++++++++
 doc/opal-api/index.rst                 |   6 ++
 doc/opal-api/opal-cec-reboot-6-116.rst |   2 +
 doc/opal-api/opal-mpipl-173-174.rst    | 156 +++++++++++++++++++++++++++++++++
 6 files changed, 250 insertions(+)
 create mode 100644 doc/device-tree/ibm,opal/dump.rst
 create mode 100644 doc/mpipl.rst
 create mode 100644 doc/opal-api/opal-mpipl-173-174.rst

Patch

diff --git a/doc/device-tree/ibm,opal/dump.rst b/doc/device-tree/ibm,opal/dump.rst
new file mode 100644
index 000000000..81a09ce26
--- /dev/null
+++ b/doc/device-tree/ibm,opal/dump.rst
@@ -0,0 +1,37 @@ 
+.. _device-tree/ibm,opal/dump:
+
+Dump (MPIPL) Device Tree Binding
+=================================
+
+See :ref:`mpipl` for general MPIPL information.
+
+dump node
+---------
+.. code-block:: dts
+
+	dump {
+                /*
+                 * Memory used by OPAL to load the kernel/initrd from PNOR
+                 * (KERNEL_LOAD_BASE & INITRAMFS_LOAD_BASE). This memory is
+                 * only needed temporarily during boot; afterwards the Linux
+                 * kernel is free to use it. OPAL will overwrite this memory
+                 * again during an MPIPL boot.
+                 *
+                 * OPAL advertises these memory ranges to the kernel. If the
+                 * kernel is using this memory and needs its contents for
+                 * proper dump creation, it has to reserve destination memory
+                 * to preserve these ranges and pass those details during
+                 * registration. During MPIPL, firmware takes care of
+                 * preserving the memory so that the post-MPIPL kernel can
+                 * create a proper dump.
+                 */
+		fw-load-area = <0x0 0x20000000 0x0 0x8000000 0x0 0x28000000 0x0 0x8000000>;
+                /* Compatible property */
+		compatible = "ibm,opal-dump";
+		phandle = <0x98>;
+                /*
+                 * This property indicates that this is an MPIPL boot. The kernel
+                 * will use the OPAL API to retrieve metadata tags and create a dump.
+                 */
+                mpipl-boot;
+	};
diff --git a/doc/index.rst b/doc/index.rst
index 79a5accf2..f21a658f9 100644
--- a/doc/index.rst
+++ b/doc/index.rst
@@ -47,6 +47,7 @@  Developer Guide and Internals
    xive
    imc
    power-management
+   mpipl
 
 
 OPAL ABI
diff --git a/doc/mpipl.rst b/doc/mpipl.rst
new file mode 100644
index 000000000..b00d336a0
--- /dev/null
+++ b/doc/mpipl.rst
@@ -0,0 +1,48 @@ 
+.. _mpipl:
+
+MPIPL (aka FADUMP) Overview
+===========================
+
+Memory Preserving Initial Program Load (MPIPL) is a Power feature where the
+contents of memory are preserved while the system reboots after a failure.
+This is accomplished by the firmware/OS publishing ranges of memory to be
+preserved across boots.
+
+Registration
+------------
+In the OPAL context, OPAL and the host Linux kernel communicate the memory
+ranges to be preserved via source descriptor tables in the HDAT (the MDST and
+MDDT tables inside SPIRAH). The host kernel can register/unregister ranges
+using the OPAL_MPIPL_UPDATE API (see :ref:`opal-api-mpipl`).
+
+Initiating dump
+---------------
+When Linux crashes, it makes an OPAL_CEC_REBOOT2 call with the reboot type set
+to MPIPL (see :ref:`opal-api-cec-reboot`). Depending on the service processor
+type, OPAL makes the appropriate call to initiate MPIPL: on FSP systems we use
+the `attn` instruction (see ``__trigger_attn()``), and on BMC systems we use
+the SBE `S0 interrupt` (see ``p9_sbe_terminate()``).
+
+Dump collection
+---------------
+Hostboot then re-IPLs the machine, taking care to copy the contents of the
+source memory ranges to the alternate locations specified in the descriptor
+tables. Hostboot publishes this information in the result descriptor table
+(the MDRT table inside the SPIRAH structure), which indicates the
+success/failure of each copy.
+
+SBE/Hostboot also perform the requisite procedures to gather the hardware
+register state of all active threads at the time of the crash.
+
+MPIPL boot
+----------
+On an MPIPL boot, OPAL adds a device tree property
+(``/ibm,opal/dump/mpipl-boot``) to indicate that this is an MPIPL boot. The
+kernel uses the OPAL_MPIPL_QUERY_TAG API (:ref:`opal-api-mpipl`) to retrieve
+the metadata tags, then uses its existing logic (kdump/fadump) to write out a
+core dump of OPAL and the Linux kernel in a format GDB and crash understand.
+
+Device tree
+-----------
+We create a new device tree node (``/ibm,opal/dump``) to pass dump details
+from OPAL to the Linux kernel (see :ref:`device-tree/ibm,opal/dump`).
diff --git a/doc/opal-api/index.rst b/doc/opal-api/index.rst
index b2fe942c7..96ce62a28 100644
--- a/doc/opal-api/index.rst
+++ b/doc/opal-api/index.rst
@@ -384,6 +384,12 @@  The OPAL API is the interface between an Operating System and OPAL.
 +---------------------------------------------+--------------+------------------------+----------+-----------------+
 | :ref:`OPAL_NPU_MEM_RELEASE`                 | 172          | Future, likely 6.4     |          |                 |
 +---------------------------------------------+--------------+------------------------+----------+-----------------+
+| :ref:`OPAL_MPIPL_UPDATE`                    | 173          | Future, likely 6.4     | POWER9   |                 |
++---------------------------------------------+--------------+------------------------+----------+-----------------+
+| :ref:`OPAL_MPIPL_REGISTER_TAG`              | 174          | Future, likely 6.4     | POWER9   |                 |
++---------------------------------------------+--------------+------------------------+----------+-----------------+
+| :ref:`OPAL_MPIPL_QUERY_TAG`                 | 175          | Future, likely 6.4     | POWER9   |                 |
++---------------------------------------------+--------------+------------------------+----------+-----------------+
 
 .. toctree::
    :maxdepth: 1
diff --git a/doc/opal-api/opal-cec-reboot-6-116.rst b/doc/opal-api/opal-cec-reboot-6-116.rst
index 9a5c79446..431098f3b 100644
--- a/doc/opal-api/opal-cec-reboot-6-116.rst
+++ b/doc/opal-api/opal-cec-reboot-6-116.rst
@@ -1,3 +1,5 @@ 
+.. _opal-api-cec-reboot:
+
 OPAL_CEC_REBOOT and OPAL_CEC_REBOOT2
 ====================================
 
diff --git a/doc/opal-api/opal-mpipl-173-174.rst b/doc/opal-api/opal-mpipl-173-174.rst
new file mode 100644
index 000000000..fa275693f
--- /dev/null
+++ b/doc/opal-api/opal-mpipl-173-174.rst
@@ -0,0 +1,156 @@ 
+.. _opal-api-mpipl:
+
+OPAL MPIPL APIs
+===============
+
+.. code-block:: c
+
+   #define OPAL_MPIPL_UPDATE                      173
+   #define OPAL_MPIPL_REGISTER_TAG                174
+   #define OPAL_MPIPL_QUERY_TAG                   175
+
+These calls are used for MPIPL (Memory Preserving Initial Program Load).
+
+They are an OPTIONAL part of the OPAL spec.
+
+If a platform supports MPIPL, the ``/ibm,opal/dump`` node will be present in
+the device tree (see :ref:`device-tree/ibm,opal/dump`).
+
+.. _OPAL_MPIPL_UPDATE:
+
+OPAL_MPIPL_UPDATE
+==================
+The Linux kernel uses this call to register/unregister memory ranges for MPIPL.
+
+.. code-block:: c
+
+   #define OPAL_MPIPL_UPDATE                      173
+
+   int64_t opal_mpipl_update(enum mpipl_ops ops, u64 src, u64 dest, u64 size)
+
+   /* MPIPL update operations */
+   enum mpipl_ops {
+        OPAL_MPIPL_ADD_RANGE            = 0,
+        OPAL_MPIPL_REMOVE_RANGE         = 1,
+        OPAL_MPIPL_REMOVE_ALL           = 2,
+        OPAL_MPIPL_FREE_PRESERVED_MEMORY= 3,
+   };
+
+ops:
+----
+  OPAL_MPIPL_ADD_RANGE
+    Add a new entry to the MPIPL table. During MPIPL, content from the
+    source address is moved to the destination address.
+      src  = Source start address
+      dest = Destination start address
+      size = Size of the range
+
+  OPAL_MPIPL_REMOVE_RANGE
+    Remove the specified entry from the MPIPL table.
+      src  = Source start address
+      dest = Destination start address
+      size = ignored
+
+  OPAL_MPIPL_REMOVE_ALL
+    Remove all kernel-passed entries from the MPIPL table.
+      src  = ignored
+      dest = ignored
+      size = ignored
+
+  OPAL_MPIPL_FREE_PRESERVED_MEMORY
+    Post MPIPL, the kernel indicates to OPAL that it has processed the dump
+    and OPAL can clear/release the metadata area.
+      src  = ignored
+      dest = ignored
+      size = ignored
+
+Return Values
+-------------
+
+``OPAL_SUCCESS``
+  Operation success
+
+``OPAL_PARAMETER``
+  Invalid parameter
+
+``OPAL_RESOURCE``
+  Ran out of space in MDST/MDDT table to add new entry
+
+``OPAL_HARDWARE``
+  Platform does not support fadump
+
+
+.. _OPAL_MPIPL_REGISTER_TAG:
+
+OPAL_MPIPL_REGISTER_TAG
+=======================
+The kernel uses this API to register tags during MPIPL registration. OPAL
+preserves these tags across MPIPL. Post MPIPL, the kernel uses the
+`opal_mpipl_query_tag` call to retrieve these tags.
+
+.. code-block:: c
+
+   int64_t opal_mpipl_register_tag(enum opal_mpipl_tags tag, uint64_t tag_val)
+
+tag:
+  OPAL_MPIPL_TAG_KERNEL
+    During first boot the kernel sets up its metadata area and asks OPAL to
+    preserve the metadata pointer across MPIPL. Post MPIPL, the kernel
+    requests this pointer from OPAL and uses it to create the dump.
+
+  OPAL_MPIPL_TAG_BOOT_MEM
+    During MPIPL registration the kernel specifies how much memory the
+    firmware can use for the post-MPIPL load. The post-MPIPL petitboot
+    kernel queries this tag to get the boot memory size.
+
+Return Values
+-------------
+``OPAL_SUCCESS``
+  Operation success
+
+``OPAL_PARAMETER``
+  Invalid parameter
+
+.. _OPAL_MPIPL_QUERY_TAG:
+
+OPAL_MPIPL_QUERY_TAG
+====================
+Post MPIPL, the Linux kernel calls this API to get the metadata tags, and
+uses them to retrieve the metadata and generate the dump.
+
+.. code-block:: c
+
+   #define OPAL_MPIPL_QUERY_TAG                 175
+
+   uint64_t opal_mpipl_query_tag(enum opal_mpipl_tags tag, uint64_t *tag_val)
+
+   enum opal_mpipl_tags {
+        OPAL_MPIPL_TAG_CPU      = 0,
+        OPAL_MPIPL_TAG_OPAL     = 1,
+        OPAL_MPIPL_TAG_KERNEL   = 2,
+        OPAL_MPIPL_TAG_BOOT_MEM = 3,
+   };
+
+tag:
+  OPAL_MPIPL_TAG_CPU
+    Pointer to the CPU register data metadata area
+  OPAL_MPIPL_TAG_OPAL
+    Pointer to the OPAL metadata area
+  OPAL_MPIPL_TAG_KERNEL
+    During first boot the kernel sets up its metadata area and asks OPAL
+    to preserve the metadata pointer across MPIPL. Post MPIPL, the kernel
+    calls this API to get the metadata pointer and uses it to retrieve
+    the metadata and create the dump.
+  OPAL_MPIPL_TAG_BOOT_MEM
+    During MPIPL registration the kernel specifies how much memory the
+    firmware can use for the post-MPIPL load. The post-MPIPL petitboot
+    kernel queries this tag to get the boot memory size.
+
+Return Values
+-------------
+
+``OPAL_SUCCESS``
+  Operation success
+
+``OPAL_PARAMETER``
+  Invalid parameter