Commit graph

6,290 commits

Adam Szczygieł
f4747547d9 arch: ISR table size optimization
Allow using a switch-case statement instead of an array holding ISR entries.

When most IRQs are unused, they share the same default entry.
As a result, most of the ISR array entries are identical duplicates.

This change allows using a dynamically generated function (produced after
the first linker pass) that dispatches with a switch-case instead of a full
array. Default entries are handled only once, in the default case.
Used IRQs get their own case sections.
This can help reduce binary size.
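
A minimal sketch of what such a generated dispatcher could look like
(hypothetical IRQ numbers and handler names; z_irq_spurious() is Zephyr's
shared handler for unclaimed interrupts, everything else here is illustrative):

    #include <stddef.h>

    extern void z_irq_spurious(const void *unused);
    extern void uart_isr(const void *arg);    /* placeholder handlers */
    extern void timer_isr(const void *arg);

    void z_generated_isr_dispatch(unsigned int irq)
    {
        switch (irq) {
        case 5:
            uart_isr(NULL);          /* IRQs with real handlers get own cases */
            break;
        case 11:
            timer_isr(NULL);
            break;
        default:
            z_irq_spurious(NULL);    /* all unused IRQs share this single entry */
            break;
        }
    }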

Signed-off-by: Adam Szczygieł <adam.szczygiel@nordicsemi.no>
2026-04-17 12:35:34 +01:00
Marcin Niestroj
9fa9dfefe4 nsi: move nsos_fcntl to more generic nsi_fcntl
This will allow reusing the fcntl middle layer in other parts besides NSOS,
such as the planned Native Simulator host FS mounting.

Signed-off-by: Marcin Niestroj <m.niestroj@emb.dev>
Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
2026-04-17 10:40:50 +02:00
Carlo Caione
acc53653b4 arm64: mm: increase MAX_XLAT_TABLES for USERSPACE && TEST
Since commit 0026a5610ac ("arm64: mm: use identity mapping for device
MMIO"), device_map() creates identity mappings (VA = PA) instead of
allocating virtual addresses from a contiguous pool. Each device at a
distinct 2MB-aligned physical address now requires its own L3 page
table, increasing the total number of translation tables needed.

Bump the USERSPACE && TEST default from 24 to 28 to accommodate the
additional tables required by identity-mapped device MMIO.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2026-04-15 17:17:55 -04:00
Guennadi Liakhovetski
33d43d0933 xtensa: ptables: fix dangling memory domains
When a memory domain is freed on Xtensa, it also has to be removed
from the global domain list. Leaving it on the list can cause
use-after-free exceptions.

Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
2026-04-15 05:51:51 -04:00
Luca Burelli
69d479c51f llext: add support for ARM TLS LE32 relocation
This relocation is used by the ARM TLS code to access thread local
variables. It is a simple absolute relocation that adds the symbol's
offset to the value at the location. This allows the code to access
thread local variables using a fixed offset from the thread pointer,
which is determined at runtime.
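
Conceptually, applying such a relocation just adds the symbol's offset within
the TLS block to the 32-bit value already stored at the patched location
(a sketch, not the actual llext relocation code; names are illustrative):

    #include <stdint.h>

    /* loc: address patched by the relocation
     * tls_off: offset of the symbol inside the thread's TLS block
     */
    static void apply_arm_tls_le32(uint32_t *loc, uint32_t tls_off)
    {
        /* generated code later reads the variable at thread_pointer + *loc */
        *loc += tls_off;
    }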

Signed-off-by: Luca Burelli <l.burelli@arduino.cc>
2026-04-15 05:50:57 -04:00
Christoph Busold
28ceaaafbd arch: riscv: Support up to 64 PMP registers
The official version of the RISC-V privileged architecture
specification extends the number of supported PMP registers to 64.
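
For reference, with 64 entries the address register for entry n is simply
pmpaddr<n>, while the configuration byte lives in pmpcfg<n/4> on RV32 and in
an even-numbered pmpcfg CSR on RV64 (a sketch of the index arithmetic only):

    /* Which pmpcfg CSR holds the config byte for PMP entry n (0..63)? */
    static inline unsigned int pmpcfg_csr_index(unsigned int n)
    {
    #ifdef CONFIG_64BIT
        return 2U * (n / 8U);    /* RV64: pmpcfg0, pmpcfg2, ..., pmpcfg14 */
    #else
        return n / 4U;           /* RV32: pmpcfg0 .. pmpcfg15 */
    #endif
    }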

Signed-off-by: Christoph Busold <cbusold@qti.qualcomm.com>
2026-04-15 05:50:45 -04:00
Appana Durga Kedareswara rao
355cb6c663 arch: arm64: coredump: add FP and SP registers for correct GDB backtraces
The ARM64 coredump arch block did not include FP (x29) and SP,
making GDB unable to unwind the stack. The GDB stub also
misinterpreted SPSR as SP (tu[20] mapped to SP_EL0), producing
corrupted stack pointer values and broken backtraces.
Bump the arch block version to v2 (24 registers, 192 bytes)
adding FP and SP after the existing 22 registers. Update the
GDB stub to auto-detect v1 vs v2 blocks by payload size and
correctly map SPSR (skip), ELR (PC), FP (x29), and SP.

When CONFIG_ARM64_SAFE_EXCEPTION_STACK is enabled and the
exception originated from EL0, use the saved esf->sp (original
sp_el0 stored by the exception entry code) instead of computing
it from the ESF address, since the exception handler may be
running on a separate safe stack.

Fixes #99054
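
A sketch of how the GDB stub might distinguish the two block versions purely
from payload size (register counts taken from this message; the helper name
is illustrative):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define ARCH_BLK_NUM_REGS_V1 22U    /* original block */
    #define ARCH_BLK_NUM_REGS_V2 24U    /* v2 adds FP (x29) and SP */

    static bool coredump_arch_blk_is_v2(size_t payload_len)
    {
        return payload_len == ARCH_BLK_NUM_REGS_V2 * sizeof(uint64_t);  /* 192 bytes */
    }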

Signed-off-by: Anirudha Sarangi <anirudha.sarangi@amd.com>
Signed-off-by: Appana Durga Kedareswara rao <appana.durga.kedareswara.rao@amd.com>
2026-04-15 05:42:40 -04:00
Carlo Caione
229815dbd8 arm64: mm: use identity mapping for device MMIO
On ARM64, Zephyr uses identity mappings (VA = PA) for kernel code, data and
boot-time device regions. The MMU fully supports address translation but
Zephyr uses it primarily for access permission enforcement.

There are currently two independent paths for mapping device MMIO regions:

1. SoC-level mmu_regions.c files use MMU_REGION_FLAT_ENTRY() to create
   identity mappings (VA = PA) directly in the page tables at boot. This
   bypasses the kernel's virtual memory tracking entirely. SoC maintainers
   must manually list peripherals in mmu_regions.c for drivers that do not
   use the device MMIO API (e.g. most existing drivers) or cannot use it
   (e.g. the GIC, which is not a regular driver).

2. The device MMIO API (device_map()) goes through k_mem_map_phys_bare(),
   which allocates a virtual address from the SRAM range and maps (VA !=
   PA) device registers there. Mapping device MMIO into the SRAM virtual
   address space is nonsensical: it conflates device registers with memory,
   wastes virtual address pool space, and produces addresses that bear no
   relation to the hardware.

The CONFIG_KERNEL_DIRECT_MAP mechanism already supports identity mapping
through k_mem_map_phys_bare() with the K_MEM_DIRECT_MAP flag, but it
requires each board defconfig to enable the Kconfig and each driver to
explicitly pass the flag.

Make identity-mapped device MMIO automatic on ARM64:

1. ARM64 CPU_CORTEX_A selects KERNEL_DIRECT_MAP when MMU is enabled. This
   eliminates the need for per-board defconfig opt-in.

2. device_map() automatically injects K_MEM_DIRECT_MAP when
   CONFIG_KERNEL_DIRECT_MAP is enabled. This is transparent to
   drivers, so no per-driver changes are needed (see the sketch
   below). The flag is gated on CONFIG_KERNEL_DIRECT_MAP rather
   than CONFIG_ARM64, keeping it architecture-agnostic.
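
A simplified sketch of point 2, assuming the existing device_map() signature
and the k_mem_map_phys_bare() path (not the exact upstream diff):

    #include <zephyr/kernel/mm.h>
    #include <zephyr/sys/device_mmio.h>

    void device_map(mm_reg_t *virt_addr, uintptr_t phys_addr, size_t size, uint32_t flags)
    {
    #ifdef CONFIG_KERNEL_DIRECT_MAP
        /* identity-map device MMIO (VA == PA); no SRAM virtual pool involved */
        flags |= K_MEM_DIRECT_MAP;
    #endif
        k_mem_map_phys_bare((uint8_t **)virt_addr, phys_addr, size, flags);
    }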

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2026-04-15 05:40:30 -04:00
Archilis Wang
9edd557451 arch: arm64: add thread-based stack unwinding
Implement thread-based unwinding to support the 'kernel thread unwind'
shell command on ARM64. This update ensures that 'thread' defaults to
'_current' when NULL, complying with the arch_stack_walk() API contract.

To enhance security, add stack bounds validation using the stack_info
and TLS pointers of the target thread. If these are not available, the
logic falls back to is_address_mapped() to keep the unwinding process
robust.
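
A heavily abbreviated sketch of the two behaviors described above, assuming
the generic arch_stack_walk() contract (only the stack_info bounds are shown;
the TLS and is_address_mapped() fallbacks are omitted):

    #include <zephyr/kernel.h>

    void arch_stack_walk(stack_trace_callback_fn callback_fn, void *cookie,
                         const struct k_thread *thread, const struct arch_esf *esf)
    {
        if (thread == NULL) {
            thread = _current;    /* API contract: NULL means the current thread */
        }

        uintptr_t low  = thread->stack_info.start;
        uintptr_t high = low + thread->stack_info.size;

        /* ... walk frame pointers, invoking callback_fn(cookie, pc) and only
         * dereferencing frame addresses inside [low, high) ...
         */
    }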

Signed-off-by: Archilis Wang <awm02289@gmail.com>
2026-04-14 22:38:19 -04:00
Mirai SHINJO
233739af62 arch: openrisc: only compile irq_offload when enabled
'irq_offload.c' should only be compiled when the 'CONFIG_IRQ_OFFLOAD'
Kconfig option is enabled.

Signed-off-by: Mirai SHINJO <oss@mshinjo.com>
2026-04-14 22:34:23 -04:00
Anas Nashif
4d5f470290 soc: arch: select SCHED_IPI_SUPPORTED if SMP
Fix a Kconfig warning when building anything with SMP.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2026-04-14 22:31:16 -04:00
Anas Nashif
85ca9bb992 kernel: move smp code into smp/
Isolate SMP code into its own folder.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2026-04-14 22:31:16 -04:00
Anas Nashif
d8a1960c8b kernel: reorg mem domain kconfig
Reorganize memory domain Kconfig and move it under userspace/.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2026-04-14 22:31:16 -04:00
Anas Nashif
eb294b7a1e kernel: move userspace code to own folder
Isolate userspace code into userspace/.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2026-04-14 22:31:16 -04:00
Daniel Leung
bac294c90f xtensa: remove mem_manage.c
The custom memory range checks should be implemented at the SoC or
board level, as these checks are SoC/board specific. So remove them
from the architecture level.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2026-04-14 22:22:23 -04:00
Nicolas Pitre
89dac23a5f arch: arm64: make MMU page size configurable via Kconfig
Add ARM64_PAGE_SIZE Kconfig choice allowing 4KB, 16KB and 64KB
page sizes. The MMU code already derived all constants from
PAGE_SIZE_SHIFT so most of the infrastructure was ready.

Changes:
- Add ARM64_PAGE_SIZE choice (4KB default, 16KB, 64KB) in Kconfig
- Derive PAGE_SIZE_SHIFT from CONFIG_MMU_PAGE_SIZE in mmu.h
- Select proper TCR granule bits (TG0/TG1) per page size in mmu.c (see
  the encoding sketch below)
- Round ARCH_THREAD_STACK_RESERVED up to page alignment so that
  the user-accessible stack buffer starts on a page boundary
- Fix MEM_REGION_ALLOC in mem_protect test to use CONFIG_MMU_PAGE_SIZE
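
For reference, the TCR_EL1 granule-field encodings that the page-size
selection has to produce (a sketch; the actual macro names in mmu.c may differ):

    /* TCR_EL1.TG0 (TTBR0 granule) and TCR_EL1.TG1 (TTBR1 granule) encodings */
    #if CONFIG_MMU_PAGE_SIZE == 4096
    #define TCR_TG0_VAL 0x0U    /* 4KB  */
    #define TCR_TG1_VAL 0x2U
    #elif CONFIG_MMU_PAGE_SIZE == 16384
    #define TCR_TG0_VAL 0x2U    /* 16KB */
    #define TCR_TG1_VAL 0x1U
    #elif CONFIG_MMU_PAGE_SIZE == 65536
    #define TCR_TG0_VAL 0x1U    /* 64KB */
    #define TCR_TG1_VAL 0x3U
    #endif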

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2026-04-14 22:15:56 -04:00
Benjamin Cabé
c868f1197b arch: drop Synopsis / Designware from ARC's full name
Drop Synopsis / Designware from ARC's full name and just go with "ARC",
which should be obvious enough of a name when listed alongside other
processor architectures.

Signed-off-by: Benjamin Cabé <benjamin@zephyrproject.org>
2026-04-14 10:37:34 -05:00
Guennadi Liakhovetski
78dcc5e7ce xtensa: (cosmetic) fix a Kconfig entry
select X if Y

in the Kconfig entry for Y doesn't make sense. Remove it.

Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
2026-04-07 15:37:52 -04:00
Nicolas Pitre
c852f0cf9c arch: arm64: fix crash on SMP secondary CPUs when PAC is enabled
arch_secondary_cpu_init() never returns (it ends with fn(arg) into the
scheduler) but its definition lacks FUNC_NORETURN. The compiler
generates a PACIASP/AUTIASP pair and turns the final fn(arg) into a
tail-call: AUTIASP followed by BR. The AUTIASP causes a PAC
authentication failure (FPAC exception) on secondary CPUs.

Fix by marking the definition FUNC_NORETURN with CODE_UNREACHABLE,
matching the extern declaration. The compiler then generates a plain
BLR without the AUTIASP epilogue.

Also fix the function signature to take no arguments, matching the
extern declaration and actual call sites, and move both
arch_secondary_cpu_init() and z_arm64_mm_init() declarations into
boot.h instead of scattering extern declarations across source files.

Also remove the dead arch_cache_init() call in z_arm64_secondary_prep_c()
that was placed after the noreturn call and could never execute. It is
absent from the primary CPU path in z_prep_c() and the implementation
is empty on arm64 anyway.
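
The shape of the fix in sketch form (Zephyr's FUNC_NORETURN and
CODE_UNREACHABLE annotations; the body and the origin of fn/arg are elided):

    FUNC_NORETURN void arch_secondary_cpu_init(void)
    {
        /* ... per-CPU setup; fn/arg come from the CPU boot parameters ... */

        fn(arg);             /* enters the scheduler and never returns */
        CODE_UNREACHABLE;    /* no epilogue is emitted, so no AUTIASP to trip PAC */
    }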

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2026-04-07 11:35:44 -05:00
Joakim Tjernlund
67b5d58c26 arm64: Set DS bit if GIC_SINGLE_SECURITY_STATE
This needs to be set while in secure mode, so do so while in EL3.

Signed-off-by: Joakim Tjernlund <joakim.tjernlund@infinera.com>
2026-04-07 11:34:49 -05:00
Daniel Leung
04df4a4aa2 xtensa: correctly flush stack when creating new thread
On a cache-incoherent system, we need to make sure the cached contents
of the stack space are properly flushed to memory when creating new
threads. This is especially important if the thread starts running on
a CPU other than the one initializing the thread. Without flushing,
the other CPU would not have the up-to-date data to correctly start
the thread.
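
A sketch of the kind of flush this implies after writing the initial frame
into the new thread's stack (using the generic cache API for illustration;
the Xtensa code may use arch-specific calls, and the names here are placeholders):

    #include <zephyr/cache.h>

    static void flush_new_thread_stack(void *stack_start, size_t stack_size)
    {
        /* push the freshly written initial frame out to memory so a different,
         * cache-incoherent CPU sees up-to-date data when the thread first runs
         */
        sys_cache_data_flush_range(stack_start, stack_size);
    }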

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2026-04-03 14:51:56 -05:00
Mirai SHINJO
02b5ab3c9e arch: riscv: remove unused stdio.h include
arch/riscv/core/thread.c does not use any stdio symbols.
Therefore, remove the unused include.

No functional change.

Signed-off-by: Mirai SHINJO <oss@mshinjo.com>
2026-04-03 23:13:10 +09:00
Holt Sun
3227ac7d55 arch: arm: mpu: fix non-ARCH cache cleanup
The generic ARM MPU nocache-memory cleanup path assumes Cortex-M
SCB dcache support whenever it needs to clean and invalidate
cache state before programming MPU regions.

That is correct for integrated ARCH_CACHE systems, but not for
cache backends such as NXP LMEM on RT11xx CM4 targets. Those
targets can select CPU_HAS_DCACHE and NOCACHE_MEMORY while using
a non-ARCH cache backend, which makes the direct SCB dcache
symbols unavailable and breaks builds in z_arm_mpu_init().

Keep the direct CMSIS SCB_CleanInvalidateDCache() call under the
ARCH_CACHE guard — since we already test SCB->CCR the integrated
cache controller is known to be present — and use the generic
cache API for other cache backends.  This preserves the existing
integrated-cache behavior while allowing non-ARCH cache backends
to participate in the same MPU cleanup path.
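
A sketch of the resulting split inside the MPU init path (fragment only;
simplified from the actual z_arm_mpu_init() logic):

    #if defined(CONFIG_ARCH_CACHE)
        if (SCB->CCR & SCB_CCR_DC_Msk) {
            /* integrated Cortex-M data cache: keep the direct CMSIS call */
            SCB_CleanInvalidateDCache();
        }
    #else
        /* non-ARCH cache backend (e.g. NXP LMEM): go through the generic API */
        sys_cache_data_flush_and_invd_all();
    #endif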

Signed-off-by: Holt Sun <holt.sun@nxp.com>
2026-04-01 15:11:35 -05:00
Benjamin Cabé
0bb2f6ee24 Revert "arch: openrisc: do not enable TLS support [REVERT ME]"
This reverts commit be388896e0.

Signed-off-by: Benjamin Cabé <benjamin@zephyrproject.org>
2026-03-31 13:56:17 -05:00
Daniel Leung
23054a97f4 kernel: dynamic stack to cached area if coherence
With kernel coherence enabled, it is possible that the stack has
been allocated in an uncached area. This has performance implications,
as memory accesses are not cached.

This adds a Kconfig option to force the indicated stack pointer of
the allocated thread stack object to be in the cached area.
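
A sketch of the remap this enables, assuming a double-mapped (cached/uncached)
memory layout and the sys_cache_cached_ptr_get() helper; treat the exact
helper name as an assumption here:

    #include <zephyr/cache.h>

    static void *stack_to_cached(void *stack_obj)
    {
        /* return the cached alias of the (possibly uncached) stack object */
        return sys_cache_cached_ptr_get(stack_obj);
    }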

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2026-03-31 11:45:30 -04:00
Camille BAUD
26af854046 arch: riscv: Fix RISC-V ECALLs
Keep only the exception number so that it can be compared against the
specific ECALL exception codes.
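
In sketch form, that means masking mcause down to its exception code before
the comparison (an illustrative fragment only; the real change is in the
RISC-V exception path):

    unsigned long cause = csr_read(mcause);

    /* keep only the exception number before comparing against ECALL codes */
    cause &= CONFIG_RISCV_MCAUSE_EXCEPTION_MASK;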

Signed-off-by: Camille BAUD <mail@massdriver.space>
2026-03-23 15:02:26 -05:00
Tony Han
dc90a8d6af arch: arm: core: add Kconfig and CMakeLists.txt for ARM9 support
Add or update Kconfig and CMakeLists.txt files for supporting ARM9
CPUs (mainly focusing on the ARM926EJ-S).

Signed-off-by: Tony Han <tony.han@microchip.com>
2026-03-23 12:27:55 -05:00
Benjamin Cabé
0ae4761b4c arch: archs.yml: set OpenRISC full name
Add the full name "OpenRISC" for the OpenRISC architecture so it shows
up correctly in the documentation.

Signed-off-by: Benjamin Cabé <benjamin@zephyrproject.org>
2026-03-23 10:19:21 +01:00
Rick Tsao
109ae98c1c arch: riscv: custom: add Andes StackSafe support for custom stack guard
Implement the custom stack guard using the Andes StackSafe hardware
stack protection. It triggers an exception on stack overflow when the
stack pointer exceeds the configured limit.

Signed-off-by: Rick Tsao <rick592@andestech.com>
2026-03-21 07:51:15 -05:00
Rick Tsao
a88b3c5453 arch: riscv: add custom stack guard
Add architecture-level support for a custom stack guard on RISC-V,
preventing stack overflow at the hardware level.

This framework allows vendors to implement the custom stack guard
using their own vendor-specific stack protection hardware, providing
flexibility for different RISC-V cores.

A new config option, CUSTOM_STACK_GUARD, allows users to enable this
stack guard on supported RISC-V cores.

Signed-off-by: Rick Tsao <rick592@andestech.com>
2026-03-21 07:51:15 -05:00
Anas Nashif
be388896e0 arch: openrisc: do not enable TLS support [REVERT ME]
Disable TLS while we wait for toolchain update in the new Zephyr SDK.

See https://github.com/zephyrproject-rtos/sdk-ng/pull/1106

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2026-03-21 07:50:57 -05:00
Keith Packard
fbf7832153 arch/openrisc: Add THREAD_LOCAL_STORAGE support
Clear the TLS base pointer (r10) in arch_kernel_init.
Allocate the TLS area in arch_tls_stack_setup.
Set the TLS base pointer register (r10) in arch_new_thread.
Set ARCH_HAS_THREAD_LOCAL_STORAGE for config OPENRISC.

Signed-off-by: Keith Packard <keithp@keithp.com>
2026-03-21 07:50:57 -05:00
Joel Holdsworth
76def70bed arch: Added initial OpenRISC architecture port
This patch adds support for the OpenRISC 1000 (or1k) architecture: a
MIPS-like open hardware ISA which was first introduced in 2000.

The thread switching implementation uses the modern Zephyr thread "switch"
architecture.

Signed-off-by: Joel Holdsworth <jholdsworth@nvidia.com>
2026-03-21 07:50:57 -05:00
Jyri Sarha
ee0cc5a620 arch: xtensa: Add XTENSA_BACKTRACE_EXCEPTION_DUMP_HOOK Kconfig option
Add the XTENSA_BACKTRACE_EXCEPTION_DUMP_HOOK Kconfig option for sending
the backtrace through the exception dump hook.

This commit also disables the printk backtrace dumping if the Kconfig
option EXCEPTION_DUMP_HOOK_ONLY is set.

Signed-off-by: Jyri Sarha <jyri.sarha@linux.intel.com>
2026-03-20 18:20:27 +09:00
Jyri Sarha
8655e64cae arch: exception: Add Kconfig EXCEPTION_DUMP_HOOK_ONLY
Add Kconfig option EXCEPTION_DUMP_HOOK_ONLY. If the option is selected,
exception dumps are sent only to the exception hook. Sometimes even
attempting to log from the exception routine can hang the system.

Signed-off-by: Jyri Sarha <jyri.sarha@linux.intel.com>
2026-03-20 18:20:27 +09:00
Jyri Sarha
9446b09ff2 arch: xtensa: Use exception dump hook helpers in exception dumping
The new exception dump hooks provide helper functions for draining or
flushing the accumulated dump data. These helpers let the backend deal
intelligently with the often excessive amount of data on limited-bandwidth
interfaces.

These calls are placed specifically for the SOF application, but AFAIK SOF
is the most widely used Zephyr application running on Xtensa.

The helpers do not have any effect if CONFIG_EXCEPTION_DUMP_HOOK is
not set.

Signed-off-by: Jyri Sarha <jyri.sarha@linux.intel.com>
2026-03-20 18:20:27 +09:00
Jyri Sarha
499cdcd51c arch: exception: Add hooks for delivering exception dumps
Add hooks for delivering exception dump prints over a specialized
interface. If CONFIG_EXCEPTION_DUMP_HOOK=y, a client program can
set function pointers for printing, flushing, and draining
exception-generated prints.

These hooks were implemented for SOF usage, but should be generic
enough to implement alternative exception reporting on any platform.
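
A sketch of how a client might wire this up; the registration function and
callback signatures below are hypothetical, only the print/flush/drain split
comes from this message:

    #include <stddef.h>

    /* hypothetical setter behind CONFIG_EXCEPTION_DUMP_HOOK */
    extern void exception_dump_hook_set(void (*print)(const char *, size_t),
                                        void (*flush)(void),
                                        void (*drain)(void));

    static void my_dump_print(const char *buf, size_t len) { /* queue to host  */ }
    static void my_dump_flush(void)                        { /* force transmit */ }
    static void my_dump_drain(void)                        { /* wait for host  */ }

    void app_install_exception_dump_hooks(void)
    {
        exception_dump_hook_set(my_dump_print, my_dump_flush, my_dump_drain);
    }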

Signed-off-by: Jyri Sarha <jyri.sarha@linux.intel.com>
2026-03-20 18:20:27 +09:00
Peter Marheine
742812e580 arch: riscv: optimize mcause decoding on interrupt
This reduces the typical number of instructions executed on interrupt by
one and saves an additional 3-4 instructions on syscall, through two
related optimizations (see the C sketch after the list).

 * The top bit of `mcause` indicates an interrupt, and the RISC-V ISA
   specification suggests checking the sign of `mcause` to separate
   interrupts from exceptions. Doing so saves one instruction in
   generating an intermediate value to compare against and comparing to
   zero instead. In the exception branch, this doesn't modify the
   temporary value and saves one instruction in not needing to reload
   with the value of `mcause`.
 * Loading a register with `CONFIG_RISCV_MCAUSE_EXCEPTION_MASK` and
   masking `mcause` with that requires two instructions at minimum, and
   three if the mask is too large to fit into a single instruction.
   Since the first optimization leaves the temporary value of `mcause`
   unmodified and it is known that the interrupt bit is clear after the
   branch to `is_interrupt`, reloading and masking the value of `mcause`
   can be skipped entirely.
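
Expressed in C, the first optimization is essentially the following (the real
code is assembly; this fragment only illustrates the sign-bit test and why no
reload or mask is needed afterwards):

    long cause = (long)csr_read(mcause);

    if (cause < 0) {
        /* top bit set: interrupt */
    } else {
        /* exception: the temporary still holds mcause and its top bit is
         * known to be clear, so no reload or masking with
         * CONFIG_RISCV_MCAUSE_EXCEPTION_MASK is required before using the
         * exception code
         */
    }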

Signed-off-by: Peter Marheine <pmarheine@chromium.org>
2026-03-19 14:48:34 -05:00
Peter Marheine
1d67c392a7 arch: riscv: remove unused RISCV_SOC_EXCEPTION_FROM_IRQ
This option was formerly enabled by sy1xx, but all supported SoCs now
appear to use the standard behavior, so this support can be removed.

Signed-off-by: Peter Marheine <pmarheine@chromium.org>
2026-03-19 14:48:34 -05:00
Duy Nguyen
af9f6f56c6 soc: Add FPU config for RXv2 and RXv3
The RXv2 and RXv3 cores support an FPU in the CPU.
This enables FPU instruction builds for the RX140, RX261 and RX26T.

Signed-off-by: Duy Nguyen <duy.nguyen.xa@renesas.com>
2026-03-19 15:27:18 +09:00
Andy Lin
6cb74ad968 arch: riscv: Add -msave-restore option to reduce code footprint
Add `-msave-restore` option to reduce the code footprint
of function prologue and epilogue.

Signed-off-by: Andy Lin <andylinpersonal@gmail.com>
2026-03-16 10:07:57 -04:00
Lauren Murphy
8bcc333e65 llext: custom sections for heap
Places heaps in custom sections with default placements.

Signed-off-by: Lauren Murphy <lauren.murphy@intel.com>
2026-03-16 10:07:20 -04:00
Jimmy Zheng
9e4def9ba1 arch: riscv: pmp: fix PMP stack guard failure when switch through ecall
When CONFIG_RISCV_ALWAYS_SWITCH_THROUGH_ECALL and CONFIG_PMP_STACK_GUARD
are enabled, the first context switch enables the stack guard
(mstatus.MPRV and MPP) in is_kernel_syscall. However, there is no proper
catch-all PMP entry during early kernel initialization.

This change uses CONFIG_PMP_KERNEL_MODE_DYNAMIC (selected by
CONFIG_MEM_ATT, CONFIG_PMP_NO_LOCK_GLOBAL, and CONFIG_PMP_STACK_GUARD) to
configure a catch-all PMP entry during PMP initialization.

Although a catch-all entry is not required when
CONFIG_RISCV_ALWAYS_SWITCH_THROUGH_ECALL is disabled, using it keeps the
PMP setup simpler and more consistent.

Signed-off-by: Jimmy Zheng <jimmyzhe@andestech.com>
2026-03-16 10:06:11 -04:00
Appana Durga Kedareswara rao
bb3795b75f arch: arm: cortex_a_r: align reset entry to 32 bytes for ARMv8-R RVBAR
ARMv8-R AArch32 cores determine the CPU start address on reset from
RVBAR (Reset Vector Base Address Register), which only stores bits
[31:5] — bits [4:0] are RES0.  Any firmware or boot-loader that
programs RVBAR from the ELF entry point will silently truncate
a non-aligned address to a 32-byte boundary, causing the CPU to
begin executing at the wrong location.

Whether __start lands on a 32-byte boundary depends on the size of
code sections placed before it, which changes with Kconfig options.
This makes the failure non-deterministic: a build may work today and
break after enabling an unrelated feature like logging.

Force 32-byte alignment on z_arm_reset/__start for ARMv8-R so the
entry point survives RVBAR truncation on any SoC.
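
The truncation being guarded against, in arithmetic form (illustrative only):

    #include <stdint.h>

    /* RVBAR stores only bits [31:5]; bits [4:0] are RES0 */
    static uint32_t rvbar_effective_entry(uint32_t elf_entry)
    {
        return elf_entry & ~UINT32_C(0x1F);    /* e.g. 0x08000014 -> 0x08000000 */
    }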

Signed-off-by: Appana Durga Kedareswara rao <appana.durga.kedareswara.rao@amd.com>
2026-03-13 16:34:05 +01:00
Mohamed Moawad
c0304cb3ba arc: mpu: Fix race condition in MPUv6 buffer validation
Add interrupt locking to arc_core_mpu_buffer_validate() to make it atomic.

The function iterates through MPU regions using bank selection, which
requires multiple register accesses. Without interrupt protection, an
interrupt or context switch during iteration can corrupt the bank
selection state, causing incorrect region lookups and spurious access
denials.
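
The shape of the fix (an IRQ lock around the bank-selected region walk;
the inner helper name is a placeholder and the signature is simplified):

    #include <stddef.h>
    #include <zephyr/irq.h>

    /* placeholder for the existing region-walk logic */
    extern int mpu_buffer_validate_locked(const void *addr, size_t size, int write);

    int arc_core_mpu_buffer_validate(const void *addr, size_t size, int write)
    {
        unsigned int key = irq_lock();
        int ret;

        /* walk MPU regions via bank selection; must not be interrupted or the
         * bank-select state can change underneath us
         */
        ret = mpu_buffer_validate_locked(addr, size, write);

        irq_unlock(key);
        return ret;
    }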

Signed-off-by: Mohamed Moawad <moawad@synopsys.com>
2026-03-13 14:44:50 +01:00
Jisheng Zhang
f0df349808 arch: arm: implement EHABI walk_stackframe
Implement walk_stackframe() according to the EHABI (Exception Handling ABI) [1].
Then implement arch_stack_walk() and z_arm_unwind_stack() based on
walk_stackframe. After that, hook z_arm_unwind_stack() into
z_arm_fatal_error() so that we can unwind the stack during a fatal error.

Tested with tests/arch/common/stack_unwind and enabling SYMTAB and
EXTRA_EXCEPTION_INFO:

*** Booting Zephyr OS build v4.3.0-3078-g23892b038f6a ***
Hello World! xxx
1: func1
2: func2
3: func1
4: func2
5: func1
6: func2
E: r0/a1:  0x00000003  r1/a2:  0x300000e8  r2/a3:  0x300000e8
E: r3/a4:  0x00000003 r12/ip:  0x00000000 r14/lr:  0x100011eb
E:  xpsr:  0x21000000
E: r4/v1:  0x00000006  r5/v2:  0x10009250  r6/v3:  0x00000000
E: r7/v4:  0x00000000  r8/v5:  0x00000000  r9/v6:  0x00000000
E: r10/v7: 0x00000000  r11/v8: 0x00000000    psp:  0x30000f58
E: EXC_RETURN: 0x0
E: Faulting instruction address (r15/pc): 0x100011fe
E: call trace:
E:      0: lr: 0x100011fe [func2+0x21]
E:      1: lr: 0x10001233 [func1+0x16]
E:      2: lr: 0x10001205 [func2+0x28]
E:      3: lr: 0x10001233 [func1+0x16]
E:      4: lr: 0x10001205 [func2+0x28]
E:      5: lr: 0x10001233 [func1+0x16]
E:      6: lr: 0x1000125d [main+0x10]
E:      7: lr: 0x10002cb5 [bg_thread_main+0x20]
E:
E: >>> ZEPHYR FATAL ERROR 3: Kernel oops on CPU 0
E: Current thread: 0x300000e8 (main)
E: Halting system

NOTE: cortex_a_r's walk_stackframe() works too, but extra_info.callee
is NULL during oops because the z_arm_svc doesn't save it, see below
comment in swap_helper.S or switch.S:

/* Zero callee_regs and exc_return (only used on Cortex-M) */
    mov r1, #0
    mov r2, #0
    bl z_do_kernel_oops

So the cortex_a_r's k_oops() can't unwind the stack now. To be safe,
let's enable ARCH_HAS_STACKWALK only for CPU_CORTEX_M for now.

Link: https://github.com/ARM-software/abi-aa/blob/main/ehabi32/ehabi32.rst [1]
Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
2026-03-12 13:59:45 -05:00
Jisheng Zhang
fc5a1f3542 arch: arm: setup exc_return
We will make use of the .exc_return member during walk_stackframe() to
know whether we have an extended stack frame or a standard one.

Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
2026-03-12 13:59:45 -05:00
Jisheng Zhang
13b3dfdfcd arch: arm: guard arch_syscall_oops() with CONFIG_USERSPACE
The arch_syscall_oops() is only used when CONFIG_USERSPACE=y.

Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
2026-03-12 13:59:45 -05:00
Daniel Leung
93a25f7b24 xtensa: set is_fatal_error before stack bound check
is_fatal_error is used to determine whether an exception is fatal.
In the default switch case for exception handling, is_fatal_error
needs to be set to true. However, setting this variable was done
after the stack bound check, so if the stack bound check fails,
is_fatal_error is never set. Set the variable earlier, before the
stack bound check.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2026-03-12 09:20:28 -05:00
Jisheng Zhang
9842b062bb cpuidle: optimize out weak stub function call for !TRACING
For !TRACING, most arch_cpu_idle and arch_cpu_atomic_idle implementations
rely on the weak stub implementations in subsys/tracing/tracing_none.c.
This works, but arch_cpu_idle sits in a hot code path, so we should make
it as efficient as possible.

Take the RISC-V implementation as an example.
Before the patch:

80000a66 <arch_cpu_idle>:
80000a66:	1141                	addi	sp,sp,-16
80000a68:	c606                	sw	ra,12(sp)
80000a6a:	37c5                	jal	80000a4a <sys_trace_idle>
80000a6c:	10500073          	wfi
80000a70:	3ff1                	jal	80000a4c <sys_trace_idle_exit>
80000a72:	47a1                	li	a5,8
80000a74:	3007a073          	csrs	mstatus,a5
80000a78:	40b2                	lw	ra,12(sp)
80000a7a:	0141                	addi	sp,sp,16
80000a7c:	8082                	ret

NOTE: the sys_trace_idle and sys_trace_idle_exit are just stubs when
!TRACING

After the patch:
80000a62 <arch_cpu_idle>:
80000a62:	10500073          	wfi
80000a66:	47a1                	li	a5,8
80000a68:	3007a073          	csrs	mstatus,a5
80000a6c:	8082                	ret
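
One way to get the post-patch code is to guard the trace calls at compile
time instead of relying on the weak stubs (a simplified, RISC-V flavored
sketch; the actual patch may achieve the same effect differently):

    void arch_cpu_idle(void)
    {
    #ifdef CONFIG_TRACING
        sys_trace_idle();
    #endif
        __asm__ volatile("wfi");
    #ifdef CONFIG_TRACING
        sys_trace_idle_exit();
    #endif
        csr_set(mstatus, 8);    /* MIE: the "li a5,8; csrs mstatus,a5" seen above */
    }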

Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
2026-03-11 23:17:29 -04:00