[RFC,v3,0/6] Extend the reserved PMP entries

Message ID 20251130111643.1291462-1-peter.lin@sifive.com

Yu-Chien Peter Lin Nov. 30, 2025, 11:16 a.m. UTC
This series extends OpenSBI to support multiple reserved PMP entries
that platforms can configure for critical memory protection needs.

Key characteristics of reserved PMP entries:

- Have highest priority
- Available in ToR mode for platform-specific use cases
- Persistent across domain context switches (cannot be disabled)
- Support runtime allocation through a dedicated allocator API

Motivation:

Reserved PMP entries address the need to protect memory regions that
cannot be covered by domain-managed PMP entries. For example, platforms
can enforce PMA_UNSAFE regions [1] parsed from the device tree. These
regions often cannot be precisely covered by one or two NAPOT entries,
so using reserved entries allocated in ToR mode optimizes PMP usage.

Additionally, reserved entries remain unchanged across domain transitions
and persist until hart reset, ensuring consistent protection.

Use case demonstration:

This series includes a demonstration on the SiFive FU540 platform, which
uses a reserved PMP entry to protect the memory region at 0x0-0x1000
during early boot. This serves as a reference implementation showing how
platforms can leverage the reserved PMP allocator.

Changes v2->v3:
- Instead of using a reserved-pmp-count DT property, this version adds
  sbi_platform_reserved_pmp_count() to determine the reserved PMP count

[1] https://lore.kernel.org/all/20251113014656.2605447-20-samuel.holland@sifive.com/

Yu-Chien Peter Lin (6):
  include: sbi: sbi_platform: add sbi_platform_reserved_pmp_count()
  lib: sbi_init: print total and reserved PMP counts
  lib: sbi: riscv_asm: support reserved PMP allocator
  lib: sbi: sbi_hart: extend PMP handling to support multiple reserved
    entries
  lib: sbi: sbi_init: call sbi_hart_init() earlier
  [TEMP] demonstrate hole protection using reserved PMP

 include/sbi/riscv_asm.h         |  6 +++
 include/sbi/sbi_hart.h          | 15 ------
 include/sbi/sbi_platform.h      | 35 +++++++++++++
 lib/sbi/riscv_asm.c             | 92 +++++++++++++++++++++++++++++++++
 lib/sbi/sbi_domain_context.c    |  6 ++-
 lib/sbi/sbi_hart.c              | 57 ++++++++++++++------
 lib/sbi/sbi_init.c              | 15 +++---
 platform/generic/sifive/fu540.c | 56 ++++++++++++++++++++
 8 files changed, 243 insertions(+), 39 deletions(-)

Comments

Anup Patel Feb. 11, 2026, 3:29 p.m. UTC | #1
On Sun, Nov 30, 2025 at 4:46 PM Yu-Chien Peter Lin <peter.lin@sifive.com> wrote:
>
> This series extends OpenSBI to support multiple reserved PMP entries
> that platforms can configure for critical memory protection needs.
>
> Key characteristics of reserved PMP entries:
>
> - Have highest priority
> - Available in ToR mode for platform-specific use cases
> - Persistent across domain context switches (cannot be disabled)
> - Support runtime allocation through a dedicated allocator API
>
> Motivation:
>
> Reserved PMP entries address the need to protect memory regions that
> cannot be covered by domain-managed PMP entries. For example, platforms
> can enforce PMA_UNSAFE regions [1] parsed from the device tree. These
> regions often cannot be precisely covered by one or two NAPOT entries,
> so using reserved entries allocated in ToR mode optimizes PMP usage.

ToR has its own downside: the next PMP entry marks the end of the ToR
region, so in many cases we might end up using more PMP entries with
ToR than with NAPOT.

For the optimal number of PMP entries, there is no clear winner in the
NAPOT vs ToR debate.

>
> Additionally, reserved entries remain unchanged across domain transitions
> and persist until hart reset, ensuring consistent protection.
>
> Use case demonstration:
>
> This series includes a demonstration on the SiFive FU540 platform, which
> uses a reserved PMP entry to protect the memory region at 0x0-0x1000
> during early boot. This serves as a reference implementation showing how
> platforms can leverage the reserved PMP allocator.

Instead of the infrastructure added by this series, a platform can
simply add root memregions in sbi_platform_early_init() using
sbi_domain_root_add_memrange().

If the above is still not sufficient then the platform can have a separate
hook for reserved PMP entries as below (although I don't recommend it).

diff --git a/include/sbi/sbi_platform.h b/include/sbi/sbi_platform.h
index e65d9877..4ecc4582 100644
--- a/include/sbi/sbi_platform.h
+++ b/include/sbi/sbi_platform.h
@@ -149,6 +149,8 @@ struct sbi_platform_operations {
             unsigned long log2len);
     /** platform specific pmp disable on current HART */
     void (*pmp_disable)(unsigned int n);
+    /** platform specific way to update a PMP entry as reserved on current HART */
+    bool (*pmp_update_reserved)(unsigned int n, bool skip_write);
 };

 /** Platform default per-HART stack size for exception/interrupt handling */
@@ -687,6 +689,23 @@ static inline void sbi_platform_pmp_disable(const struct sbi_platform *plat,
         sbi_platform_ops(plat)->pmp_disable(n);
 }

+/**
+ * Platform specific way to update a PMP entry as reserved on current HART
+ *
+ * @param plat pointer to struct sbi_platform
+ * @param n index of the pmp entry
+ * @param skip_write flag indicating pmp entry must not be written
+ *
+ * @return true if a pmp entry is reserved and false otherwise
+ */
+static inline bool sbi_platform_pmp_update_reserved(const struct sbi_platform *plat,
+                            unsigned int n, bool skip_write)
+{
+    if (plat && sbi_platform_ops(plat)->pmp_update_reserved)
+        return sbi_platform_ops(plat)->pmp_update_reserved(n, skip_write);
+    return false;
+}
+
 #endif

 #endif
diff --git a/lib/sbi/sbi_hart_pmp.c b/lib/sbi/sbi_hart_pmp.c
index be459129..27242113 100644
--- a/lib/sbi/sbi_hart_pmp.c
+++ b/lib/sbi/sbi_hart_pmp.c
@@ -120,6 +120,7 @@ static bool is_valid_pmp_idx(unsigned int pmp_count, unsigned int pmp_idx)

 static int sbi_hart_smepmp_configure(struct sbi_scratch *scratch)
 {
+    const struct sbi_platform *plat = sbi_platform_ptr(scratch);
     struct sbi_domain_memregion *reg;
     struct sbi_domain *dom = sbi_domain_thishart_ptr();
     unsigned int pmp_log2gran, pmp_bits;
@@ -147,6 +148,9 @@ static int sbi_hart_smepmp_configure(struct sbi_scratch *scratch)
         /* Skip reserved entry */
         if (pmp_idx == SBI_SMEPMP_RESV_ENTRY)
             pmp_idx++;
+        while (sbi_platform_pmp_update_reserved(plat, pmp_idx, false))
+            pmp_idx++;
+
         if (!is_valid_pmp_idx(pmp_count, pmp_idx))
             return SBI_EFAIL;

@@ -190,6 +194,9 @@ static int sbi_hart_smepmp_configure(struct sbi_scratch *scratch)
         /* Skip reserved entry */
         if (pmp_idx == SBI_SMEPMP_RESV_ENTRY)
             pmp_idx++;
+        while (sbi_platform_pmp_update_reserved(plat, pmp_idx, true))
+            pmp_idx++;
+
         if (!is_valid_pmp_idx(pmp_count, pmp_idx))
             return SBI_EFAIL;

@@ -255,6 +262,7 @@ static int sbi_hart_smepmp_unmap_range(struct sbi_scratch *scratch,

 static int sbi_hart_oldpmp_configure(struct sbi_scratch *scratch)
 {
+    const struct sbi_platform *plat = sbi_platform_ptr(scratch);
     struct sbi_domain_memregion *reg;
     struct sbi_domain *dom = sbi_domain_thishart_ptr();
     unsigned long pmp_addr, pmp_addr_max;
@@ -269,6 +277,9 @@ static int sbi_hart_oldpmp_configure(struct sbi_scratch *scratch)

     pmp_idx = 0;
     sbi_domain_for_each_memregion(dom, reg) {
+        while (sbi_platform_pmp_update_reserved(plat, pmp_idx, false))
+            pmp_idx++;
+
         if (!is_valid_pmp_idx(pmp_count, pmp_idx))
             return SBI_EFAIL;

Regards,
Anup