Commit graph

433 commits

Author SHA1 Message Date
Christoph Busold
28ceaaafbd arch: riscv: Support up to 64 PMP registers
The official version of the RISC-V privileged architecture
specification extends the number of supported PMP registers to 64.

Signed-off-by: Christoph Busold <cbusold@qti.qualcomm.com>
2026-04-15 05:50:45 -04:00
Anas Nashif
eb294b7a1e kernel: move userspace code to own folder
Isolate userspace code into userspace/.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2026-04-14 22:31:16 -04:00
Mirai SHINJO
02b5ab3c9e arch: riscv: remove unused stdio.h include
arch/riscv/core/thread.c does not use any stdio symbols.
Therefore, remove the unused include.

No functional change.

Signed-off-by: Mirai SHINJO <oss@mshinjo.com>
2026-04-03 23:13:10 +09:00
Camille BAUD
26af854046 arch: riscv: Fix RISC-V ECALLs
Keep only the exception number so it can be compared against the specific ECALL causes.

Signed-off-by: Camille BAUD <mail@massdriver.space>
2026-03-23 15:02:26 -05:00
Rick Tsao
109ae98c1c arch: riscv: custom: add Andes StackSafe support for custom stack guard
Implement the custom stack guard using the Andes StackSafe hardware
stack protection. It triggers an exception on stack overflow when the
stack pointer exceeds the configured limit.

Signed-off-by: Rick Tsao <rick592@andestech.com>
2026-03-21 07:51:15 -05:00
Rick Tsao
a88b3c5453 arch: riscv: add custom stack guard
Add architecture-level support for a custom stack guard on RISC-V,
preventing stack overflow at the hardware level.

This framework allows vendors to implement the custom stack guard
using their own vendor-specific stack protection hardware, providing
flexibility for different RISC-V cores.

A new config option, CUSTOM_STACK_GUARD, allows users to enable this
stack guard on supported RISC-V cores.

Signed-off-by: Rick Tsao <rick592@andestech.com>
2026-03-21 07:51:15 -05:00
Peter Marheine
742812e580 arch: riscv: optimize mcause decoding on interrupt
This reduces the typical number of instructions executed on interrupt by
one and saves an additional 3-4 instructions on syscall, through two
related optimizations.

 * The top bit of `mcause` indicates an interrupt, and the RISC-V ISA
   specification suggests checking the sign of `mcause` to separate
   interrupts from exceptions. Doing so saves one instruction in
   generating an intermediate value to compare against and comparing to
   zero instead. In the exception branch, this doesn't modify the
   temporary value and saves one instruction in not needing to reload
   with the value of `mcause`.
 * Loading a register with `CONFIG_RISCV_MCAUSE_EXCEPTION_MASK` and
   masking `mcause` with that requires two instructions at minimum, and
   three if the mask is too large to fit into a single instruction.
   Since the first optimization leaves the temporary value of `mcause`
   unmodified and it is known that the interrupt bit is clear after the
   branch to `is_interrupt`, reloading and masking the value of `mcause`
   can be skipped entirely.

Signed-off-by: Peter Marheine <pmarheine@chromium.org>
2026-03-19 14:48:34 -05:00
Peter Marheine
1d67c392a7 arch: riscv: remove unused RISCV_SOC_EXCEPTION_FROM_IRQ
This option was formerly enabled by sy1xx, but all supported SoCs now
appear to use the standard behavior, so this support can be removed.

Signed-off-by: Peter Marheine <pmarheine@chromium.org>
2026-03-19 14:48:34 -05:00
Andy Lin
6cb74ad968 arch: riscv: Add -msave-restore option to reduce code footprint
Add the `-msave-restore` option to reduce the code footprint
of function prologues and epilogues.

Signed-off-by: Andy Lin <andylinpersonal@gmail.com>
2026-03-16 10:07:57 -04:00
Jimmy Zheng
9e4def9ba1 arch: riscv: pmp: fix PMP stack guard failure when switch through ecall
When CONFIG_RISCV_ALWAYS_SWITCH_THROUGH_ECALL and CONFIG_PMP_STACK_GUARD
are enabled, the first context switch enables the stack guard
(mstatus.MPRV and MPP) in is_kernel_syscall. However, there is no proper
catch-all PMP entry during early kernel initialization.

This change uses CONFIG_PMP_KERNEL_MODE_DYNAMIC (selected by
CONFIG_MEM_ATTR, CONFIG_PMP_NO_LOCK_GLOBAL, and CONFIG_PMP_STACK_GUARD) to
configure a catch-all PMP entry in pmp initialization.

Although a catch-all entry is not required when
CONFIG_RISCV_ALWAYS_SWITCH_THROUGH_ECALL is disabled, using it keeps the
PMP setup simpler and more consistent.

Signed-off-by: Jimmy Zheng <jimmyzhe@andestech.com>
2026-03-16 10:06:11 -04:00
Jisheng Zhang
9842b062bb cpuidle: optimize out weak stub function call for !TRACING
When !TRACING, most arch_cpu_idle and arch_cpu_atomic_idle implementations
rely on the weak stub implementations in subsys/tracing/tracing_none.c.
This works, but arch_cpu_idle sits in a hot code path, so it should be as
efficient as possible.

Take the riscv implementation as an example. Before the patch:

80000a66 <arch_cpu_idle>:
80000a66:	1141                	addi	sp,sp,-16
80000a68:	c606                	sw	ra,12(sp)
80000a6a:	37c5                	jal	80000a4a <sys_trace_idle>
80000a6c:	10500073          	wfi
80000a70:	3ff1                	jal	80000a4c <sys_trace_idle_exit>
80000a72:	47a1                	li	a5,8
80000a74:	3007a073          	csrs	mstatus,a5
80000a78:	40b2                	lw	ra,12(sp)
80000a7a:	0141                	addi	sp,sp,16
80000a7c:	8082                	ret

NOTE: the sys_trace_idle and sys_trace_idle_exit are just stubs when
!TRACING

After the patch:
80000a62 <arch_cpu_idle>:
80000a62:	10500073          	wfi
80000a66:	47a1                	li	a5,8
80000a68:	3007a073          	csrs	mstatus,a5
80000a6c:	8082                	ret

Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
2026-03-11 23:17:29 -04:00
Camille BAUD
9a8265c28f arch: riscv: thead: fence is insufficient to fence data cache
Using only the fence instruction to order data-cache maintenance
operations is insufficient to prevent unordered accesses after flushing in
some cases. Gate the dcache instructions the same way the icache
instructions are gated.

Signed-off-by: Camille BAUD <mail@massdriver.space>
2026-03-09 14:20:55 +01:00
Pete Johanson
15ac638118 soc: adi: Don't enable built in barriers for MAX32 RV32 core
The MAX32 RV32 core does not implement the fence instruction used by the
RISC-V synchronization intrinsic, so don't enable the builtin barriers for
that target.

Signed-off-by: Pete Johanson <pete.johanson@analog.com>
2026-02-25 18:47:53 +01:00
William Markezana
81c77d59d1 arch: riscv: fix missing CONFIG_ prefix for RISCV_NO_MTVAL_ON_FP_TRAP
The ifdef guard in isr.S was written without the CONFIG_ prefix,
making the mtval fallback path dead code on all platforms including
QEMU (which previously worked via CONFIG_QEMU_TARGET).

Signed-off-by: William Markezana <william.markezana@gmail.com>
2026-02-23 11:36:28 +00:00
Mirai SHINJO
78718321e9 arch: riscv: coredump: add per-thread dump support
Select ARCH_SUPPORTS_COREDUMP_THREADS (if !SMP) and
ARCH_SUPPORTS_COREDUMP_STACK_PTR for RISC-V, and implement
arch_coredump_stack_ptr_get().

This enables CONFIG_DEBUG_COREDUMP_MEMORY_DUMP_THREADS and
CONFIG_DEBUG_COREDUMP_THREAD_STACK_TOP.

For non-current threads, return thread->callee_saved.sp.

For the faulting current thread in stack-top mode, return the
exception-time SP from z_riscv_get_sp_before_exc() (cached during
arch_coredump_info_dump()) instead of thread->callee_saved.sp,
which reflects switch-time state.

Signed-off-by: Mirai SHINJO <oss@mshinjo.com>
2026-02-18 10:31:33 +00:00
Mirai SHINJO
9473277fe2 arch: riscv: coredump: dump all 33 registers
Expand the RISC-V coredump register block to all 33 GDB registers
(x0-x31, pc) in register-number order.

Previously only 18 registers were serialized. Populate zero, sp, gp,
tp, s0, and s1-s11 (when available).

Bump ARCH_HDR_VER from 1 to 3 (RISC-V 32-bit layout) and from 2 to 4
(RISC-V 64-bit layout) for the new wire format.

Keep the RISC-V 32-bit block fixed at 33 fields on the RISC-V RV32E
profile; registers not implemented by RV32E remain zero-filled so
version 3 always has a stable size.

Signed-off-by: Mirai SHINJO <oss@mshinjo.com>
2026-02-18 10:31:33 +00:00
Alex Lyrakis
359dead234 riscv: pmp: add option to unlock ROM region for debugging
Add CONFIG_PMP_UNLOCK_ROM_FOR_DEBUG option to conditionally disable
the lock bit (L=0) for the ROM region PMP entry. This allows debuggers
running in machine mode to access ROM for setting breakpoints and
reading instructions while preserving userspace protection.

When PMP lock bits are set, they restrict access even in machine mode,
causing "unable to halt hart" errors with hardware debuggers like
OpenOCD. This option provides a surgical fix that only affects the ROM
region - NULL pointer guards and stack guards remain locked to catch
critical bugs during development.

The option integrates with existing PMP_NO_LOCK_GLOBAL configuration
using nested COND_CODE_1 macros and defaults to disabled for production
builds.

Fixes: zephyrproject-rtos/zephyr#82729

Signed-off-by: Alex Lyrakis <alex_gfd@hotmail.com>
2026-02-13 10:06:18 +01:00
Firas Sammoura
faa65388d3 arch: riscv: Allow z_riscv_fatal_error to return
The Zephyr kernel's generic `z_fatal_error()` function, which is
invoked by architecture-specific fatal error handlers, is not
guaranteed to be non-returning. For instance, it can return if an
essential thread aborts itself.

The RISC-V port's `z_riscv_fatal_error` function was previously
inconsistently marked as `FUNC_NORETURN`. This commit removes this
attribute to align with the core kernel behavior, allowing the
function to return if `z_fatal_error()` returns.

Specific changes include:

-   Removed `FUNC_NORETURN` from `z_riscv_fatal_error` declarations
    in `fatal.c` and `kernel_arch_func.h`.
-   Removed `CODE_UNREACHABLE` after the call to `z_fatal_error`
    within `z_riscv_fatal_error` as it can now return.
-   In `isr.S`, changed `tail z_riscv_fatal_error` to
    `call z_riscv_fatal_error` in the exception entry, followed by
    a jump to `check_reschedule` to handle the return path.
-   Added `CODE_UNREACHABLE` at call sites of `z_riscv_fatal_error`
    (e.g., in `z_riscv_fault`, `z_check_user_fault`,
    `arch_irq_spurious`) where the context ensures the call is
    effectively terminal.

Signed-off-by: Firas Sammoura <fsammoura@google.com>
2026-01-29 13:39:45 +01:00
Afonso Oliveira
ca062130c9 arch: riscv: call IMSIC secondary init on SMP boot
Invoke IMSIC secondary initialization during RISC-V SMP bring-up.

Signed-off-by: Afonso Oliveira <afonsoo@synopsys.com>
2026-01-26 14:16:22 +01:00
Fin Maaß
2cea7b0582 thead: riscv: use riscv,isa-extensions dt prop
Use the riscv,isa-extensions devicetree property for RISC-V CPUs.

Signed-off-by: Fin Maaß <f.maass@vogl-electronic.com>
2026-01-26 14:15:18 +01:00
Fin Maaß
e2fd8e6de7 riscv: use riscv,isa-extensions dt prop
Implement and use the riscv,isa-extensions devicetree property,
as in Linux
https://www.kernel.org/doc/Documentation/devicetree/bindings/riscv/extensions.yaml
to set the RISC-V extensions.

Signed-off-by: Fin Maaß <f.maass@vogl-electronic.com>
2026-01-26 14:15:18 +01:00
Fin Maaß
daf90f79ee arch: riscv: add dependencies to FLOAT_HARD
1. It requires floating-point registers, so the F extension is
required (Zfinx uses the integer registers instead).
2. RV32E does not support the hardware floating-point calling convention.

Signed-off-by: Fin Maaß <f.maass@vogl-electronic.com>
2026-01-24 08:47:18 -06:00
Andy Lin
d807e39a2c arch: riscv: Add the support for Zbkb ISA extension
Introduce the missing flag to compile code with the Zbkb extension,
which is already supported by GCC 12 in the current SDK.

Signed-off-by: Andy Lin <andylinpersonal@gmail.com>
2026-01-23 13:51:55 +01:00
Peter Mitsis
3944b0cfc7 kernel: Extend thread user_options to 16 bits
Extend the thread user_options field from an 8-bit value to 16 bits to
provide more space for future values.

Also, as the size of this field has changed, the values for the
existing architecture specific thread options have also shifted
from the upper end of the old 8-bit field, to the upper end of
the new 16-bit field.

Fixes #101034

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2026-01-22 08:40:17 +00:00
Benjamin Cabé
f64bb4bf1e arch: riscv: avoid the use of "sanity check" term
As per coding guidelines, "sanity check" must be avoided.

Signed-off-by: Benjamin Cabé <benjamin@zephyrproject.org>
2026-01-21 20:06:06 +01:00
Sylvio Alves
b05332abee arch: riscv: pmp: add SoC-specific region support
Add infrastructure for SoCs to define additional PMP regions
that need protection beyond the standard ROM region. This uses
iterable sections to collect region definitions at link time.

The PMP_SOC_REGION_DEFINE macro allows SoCs to register memory
regions with specific permissions. These regions become global
PMP entries shared between M-mode and U-mode.

Signed-off-by: Sylvio Alves <sylvio.alves@espressif.com>
2026-01-13 17:26:48 +01:00
Mathieu Choplain
36170c4530 arch: *: remove check for CONFIG_SOC_PER_CORE_INIT_HOOK
soc_per_core_init_hook() is usually called from arch_kernel_init() and
arch_secondary_cpu_init() which are C functions. As such, there is no need
to check for CONFIG_SOC_PER_CORE_INIT_HOOK since platform/hooks.h provides
a no-op function-like macro implementation if the Kconfig option is not
enabled.

Remove the Kconfig option check from all files.

Signed-off-by: Mathieu Choplain <mathieu.choplain-ext@st.com>
2026-01-07 19:39:53 +01:00
Firas Sammoura
7dc9e87f6c riscv: pmp: Add API to change region permissions at runtime
The new function 'z_riscv_pmp_change_permissions' provides a mechanism
to modify the Read, Write, and Execute (R/W/X) permissions of an
existing PMP region based on its memory attribute index.

Signed-off-by: Firas Sammoura <fsammoura@google.com>
2025-12-08 15:01:37 -05:00
Firas Sammoura
e0f2b4e354 riscv: pmp: Add support for unlocked global PMP entries
Adds the Kconfig option PMP_NO_LOCK_GLOBAL to remove the PMP Lock bit
usage. The global entry is an internal detail of the driver
implementation and should not be reflected in the user interface. This
allows the application to dynamically reconfigure the PMP entries
without requiring hard reset. This is essential for firmware that
performs an RO-to-RW jump. By keeping these system entries unlocked,
higher-privileged M-mode code can dynamically reconfigure memory
permissions during the secure handover process, which is not possible if
the entries are permanently locked during early boot.

Signed-off-by: Firas Sammoura <fsammoura@google.com>
2025-11-28 09:49:50 +00:00
Chris Friedt
27180d2fc5 arch: riscv + xtensa + x86: workaround needed for LLVM linker
Due to slight differences in the way that LLVM and GNU linkers work,
the call to `z_stack_space_get()` is not dead-stripped when linking
with `lld` but it is dead-stripped when linking with GNU `ld`.

The `z_stack_space_get()` function is only available when
`CONFIG_INIT_STACKS` and `CONFIG_THREAD_STACK_INFO` are defined.

The issue is reproducible (although requires building LLVM and
setting up some environment variables) and goes away with the proposed
workaround.

Signed-off-by: Robin Kastberg <robin.kastberg@iar.com>
Signed-off-by: Chris Friedt <cfriedt@tenstorrent.com>
2025-11-18 19:53:10 -05:00
Yong Cong Sin
3c5807f6ec arch: riscv: stacktrace: support stacktrace in early system init
Add support for stacktraces in the dummy thread, which is used to run
the early system initialization code before the kernel switches
to the main thread.

On RISC-V, the dummy thread runs temporarily on the interrupt
stack, but currently we do not initialize the stack info for the
dummy thread, hence check the address against the interrupt stack.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
2025-11-18 17:38:22 -05:00
Firas Sammoura
f877417f0d tests: riscv: Implement unit tests for PMP memattr configuration and state
This commit implements a new unit test suite to validate the
integration of Device Tree memory attributes (`zephyr,memory-attr`)
with the RISC-V Physical Memory Protection (PMP) hardware.

The test suite includes:
1. **`test_pmp_devicetree_memattr_config`**: Verifies that the PMP
   Control and Status Registers (CSRs) are programmed correctly based
   on the memory regions defined with `zephyr,memory-attr` in the
   Device Tree. It iterates through the active PMP entries and
   asserts a match against the expected DT-defined regions.
2. **`test_riscv_mprv_mpp_config`**: Checks the initial state of the
   Modify Privilege (MPRV) bit and Machine Previous Privilege
   (MPP) field in the `mstatus` CSR to ensure PMP
   is configured for correct privilege level switching during boot.
3. **`test_dt_pmp_perm_conversion`**: Validates the
   `DT_MEM_RISCV_TO_PMP_PERM` macro to ensure the conversion from
   Device Tree memory attribute flags to RISC-V PMP permission bits
   (R/W/X) is correct.

Signed-off-by: Firas Sammoura <fsammoura@google.com>
2025-11-17 09:25:01 -05:00
Firas Sammoura
80d34bbe0a riscv: pmp: Extract region address calculation to helper function
The logic to decode PMP addressing modes (**TOR**, **NA4**, **NAPOT**) into
physical start and end addresses was previously embedded in
`print_pmp_entries()`.

Extract this calculation into a new static helper function,
`pmp_decode_region()`, to significantly improve the readability and
modularity of the PMP debug printing code.

The new helper function is fully self-contained and exposes a defined API
for the PMP address decoding logic. This enables **direct reuse** in
**unit tests** (e.g., using **Ztest**) to verify the core address
calculation accuracy for all PMP modes and boundary conditions, independent
of the main PMP initialization or logging path.

Signed-off-by: Firas Sammoura <fsammoura@google.com>
2025-11-17 09:25:01 -05:00
Firas Sammoura
c011fddf95 riscv: pmp: Support custom entries from Device Tree for memory attributes
The Physical Memory Protection (PMP) initialization is updated to support
custom entries defined in the Device Tree (DT) using the `zephyr,memattr`
property, contingent on `CONFIG_MEM_ATTR` being enabled. A new function,
`set_pmp_mem_attr()`, iterates over DT-defined regions and programs PMP
entries in `z_riscv_pmp_init()`, allowing for early, flexible, and
hardware-specific R/W/X protection for critical memory areas. DT-based
entries are also installed in `z_riscv_pmp_kernelmode_prepare()` for
thread-specific configuration. The logic for the temporary PMP "catch-all"
entry is adjusted to account for new DT entries. Furthermore, the PMP
domain resync logic now masks user partition permissions against DT-defined
region permissions, preventing privilege escalation. `CONFIG_RISCV_PMP` is
updated to select `PMP_KERNEL_MODE_DYNAMIC` if `MEM_ATTR`. Finally, the
`pmp_cfg` array in `z_riscv_pmp_init()` is initialized to zero to prevent
writing uninitialized stack data to unused PMP entries.

Signed-off-by: Firas Sammoura <fsammoura@google.com>
2025-11-17 09:25:01 -05:00
Firas Sammoura
9fd456e4ab riscv: pmp: Fix pmp_addr index for per-CPU IRQ stack guards in SMP
When CONFIG_SMP is enabled, per-CPU IRQ stack guards are added. To prevent
unintended TOR (Top of Range) entry sharing, the PMP address entry
preceding each guard region in `pmp_addr` is marked with -1L.

The previously used index to access `pmp_addr` could become stale, as
additional PMP entries may be allocated after its initial calculation
but before the SMP loop for IRQ guards.

Signed-off-by: Firas Sammoura <fsammoura@google.com>
2025-11-17 09:25:01 -05:00
Firas Sammoura
c875c586b7 riscv: pmp: Separate global state for M-mode and U-mode regions
Split global PMP state variables (index and last address) into
mode-specific counterparts to correctly track the end of global PMP
ranges for both M-mode (kernel) and U-mode (userspace).

This ensures correct per-thread PMP initialization when configuring
mode-specific dynamic PMP entries.

Signed-off-by: Firas Sammoura <fsammoura@google.com>
2025-11-17 09:25:01 -05:00
Firas Sammoura
f6cec1c30f riscv: Add CONFIG_PMP_KERNEL_MODE_DYNAMIC
Introduce `CONFIG_PMP_KERNEL_MODE_DYNAMIC` to enable dynamic
configuration and activation of Machine mode PMP entries. This allows
PMP settings to be managed efficiently during transitions between
kernel and thread contexts.

Signed-off-by: Firas Sammoura <fsammoura@google.com>
2025-11-17 09:25:01 -05:00
Firas Sammoura
3b27d95f61 riscv: pmp: Rename PMP stackguard functions to kernelmode
Rename the `z_riscv_pmp_stackguard_*` functions to
`z_riscv_pmp_kernelmode_*`. This change better reflects that
these functions are used for general kernel mode PMP configuration,
not strictly limited to stack guard purposes.

Call sites in fatal.c, isr.S, and switch.S have been updated accordingly.

Signed-off-by: Firas Sammoura <fsammoura@google.com>
2025-11-17 09:25:01 -05:00
Afonso Oliveira
b9a15bf5c8 arch/riscv: Enable NMI delivery for SMRNMI hardware
Add option to enable NMI delivery on boot for SMRNMI hardware.

Changes:
- Add CONFIG_RISCV_SMRNMI_ENABLE_NMI_DELIVERY Kconfig option
- Define SMRNMI CSRs in arch/riscv/include/csr.h
- Set NMIE bit during boot to enable NMI delivery

SMRNMI hardware generates but doesn't deliver NMIs when NMIE=0 (default).
This causes twister test failures and prevents handling of critical
hardware events like watchdog NMIs and ECC errors.

Setting NMIE=1 enables NMI delivery, but note that this implementation
only sets the enable bit - it does not provide full SMRNMI support
(no mnret instruction handling, no RNMI handlers). Users must implement
proper RNMI handlers in SoC-specific code to avoid undefined behavior.

Signed-off-by: Afonso Oliveira <afonsoo@synopsys.com>
2025-11-17 09:23:11 -05:00
Fin Maaß
402c66a5e1 arch: riscv: vexriscv: add VexRiscv cache driver
add driver for VexRiscv CPU cache controller.

Signed-off-by: Fin Maaß <f.maass@vogl-electronic.com>
2025-11-13 20:41:07 -05:00
Afonso Oliveira
0cdc464285 riscv: add Smcsrind indirect CSR access extension support
Add support for the RISC-V Smcsrind extension, which provides
indirect access to CSRs through the MISELECT and MIREG registers.

Changes:
- Added CONFIG_RISCV_ISA_EXT_SMCSRIND Kconfig option
- Implemented 4 helper functions for indirect CSR access:
  * icsr_read/write - basic access
  * icsr_read_set/clear - bit manipulation
- Defined 7 CSR registers (MISELECT, MIREG, MIREG2-6)

This is a CSR-only extension that does not require any compiler
support or march flags. The helper functions compile to standard
CSR instructions and work with any toolchain that supports Zicsr.

Primary use case: RISC-V AIA (Advanced Interrupt Architecture)
uses indirect CSRs to access IMSIC (Incoming MSI Controller)
registers.

Signed-off-by: Afonso Oliveira <afonsoo@synopsys.com>
2025-11-13 20:38:38 -05:00
Firas Sammoura
8a23eff9f6 tests: riscv: Add unit tests for clearing unlocked PMP entries
Adds a new test suite to verify the behavior of `riscv_pmp_clear_all()`.
These tests ensure that the function correctly clears all unlocked PMP
entries while preserving any entries that are locked.

Signed-off-by: Firas Sammoura <fsammoura@google.com>
2025-11-13 20:36:55 -05:00
Firas Sammoura
9dc3906cd3 arch: riscv: Add z_riscv_pmp_clear_all() to reset PMP entries
Introduce the new function `riscv_pmp_clear_all()` to reset the Physical
Memory Protection (PMP) configuration.

This function iterates through all configured PMP slots. For each entry,
it writes 0x0 to the entry's 8-bit configuration register. This action
attempts to clear all fields, including the Address Matching Mode (A) bits
(setting the region type to OFF), the permission bits (R, W, X), and
the Lock (L) bit.

According to the RISC-V specification, any writes to the configuration
or address registers of a locked PMP entry are ignored. Thus, locked
entries will remain unchanged, while all non-locked entries will be
effectively disabled and their permissions cleared.

The function ensures it operates in Machine mode with MSTATUS.MPRV = 0
and MSTATUS.MPP = M-mode before modifying any PMP Control and Status
Registers (CSRs).

This provides a mechanism to clear all non-locked PMP regions,
returning them to a default disabled state. The function declaration is
exposed in the `include/zephyr/arch/riscv/pmp.h` header file, making it
available for inclusion and use by external modules.

It is recommended for firmware to call this function before transitioning
from a Read-Only (RO) stage to a Read-Write (RW) stage. This ensures
that any PMP settings established during the RO phase, which might no
longer be appropriate, are cleared, providing a clean and secure base
PMP configuration for the RW firmware.

Signed-off-by: Firas Sammoura <fsammoura@google.com>
2025-11-13 20:36:55 -05:00
Camille BAUD
8b2c75a2fb arch: riscv: thead: Fix range size
Previously only a partial range operation was performed; fix this.

Signed-off-by: Camille BAUD <mail@massdriver.space>
2025-11-05 15:39:02 -05:00
Firas Sammoura
2196d2a77d Revert "riscv: pmp: Add helper to write PMP configuration CSRs"
This reverts commit 9482f8df02.

Signed-off-by: Firas Sammoura <fsammoura@google.com>
2025-11-04 13:56:09 -05:00
Anas Nashif
303af992e5 style: fix 'if (' usage in cmake files
Replace with 'if(' and 'else(' per the cmake style guidelines.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2025-10-29 11:44:13 +02:00
Łukasz Stępnicki
a825e014d8 arch: riscv: core: vector_table alignment fix
For RISC-V, the vector table needs to be aligned according to
CONFIG_ARCH_IRQ_VECTOR_TABLE_ALIGN. This was missing
when using LTO, causing issues when direct ISRs were in use.

Signed-off-by: Łukasz Stępnicki <lukasz.stepnicki@nordicsemi.no>
2025-10-28 17:41:48 +02:00
Fin Maaß
24669df207 arch: riscv: use RISCV_ISA_EXT_F to set CPU_HAS_FPU
Use CONFIG_RISCV_ISA_EXT_F to set CONFIG_CPU_HAS_FPU.
Same for CONFIG_RISCV_ISA_EXT_D and
CONFIG_CPU_HAS_FPU_DOUBLE_PRECISION.

Signed-off-by: Fin Maaß <f.maass@vogl-electronic.com>
2025-10-24 13:21:47 -04:00
Fin Maaß
3be1b9ca7a arch: riscv: use RISCV_ISA_RV64I to set 64BIT
Use CONFIG_RISCV_ISA_RV64I to set CONFIG_64BIT.

Signed-off-by: Fin Maaß <f.maass@vogl-electronic.com>
2025-10-24 13:21:47 -04:00
Yong Cong Sin
643e09febf arch: riscv: streamline fatal handling code
`CONFIG_EXTRA_EXCEPTION_INFO` that was added in #78065 doesn't
seem necessary, as we were already storing and printing the
callee-saved-registers before that. All `CONFIG_EXTRA_EXCEPTION_INFO`
does in RISCV is to add an additional `_callee_saved_t *csf` in the
`struct arch_esf`, whose overhead is negligible compared to what is
enabled by `CONFIG_EXCEPTION_DEBUG`.

Let's remove `CONFIG_EXTRA_EXCEPTION_INFO`, and have that extra
`_callee_saved_t *csf` in the `struct arch_esf` as long as
`CONFIG_EXCEPTION_DEBUG` is enabled.

TL;DR: it doesn't make sense to not enable `CONFIG_EXTRA_EXCEPTION_INFO`
when `CONFIG_EXCEPTION_DEBUG` is enabled, so let's merge them.

Then, since `*csf` is always available in the `struct arch_esf` when
`CONFIG_EXCEPTION_DEBUG=y`, we can simply rely on that pointer in
`z_riscv_fatal_error()` instead of an additional argument in
`z_riscv_fatal_error_csf()`, rendering the latter redundant and thus
can be removed.

Additionally, save the callee-saved registers before jumping to
`z_riscv_fault()`, so that the callee-saved registers are printed on
generic CPU exception as well.

Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2025-10-24 08:51:15 -07:00