This implements arch_thread_priv_stack_space_get() so that it
can be used to figure out how much privileged stack space is
used.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
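A minimal sketch of what such an implementation could look like, assuming the privileged stack is filled with a known pattern at thread initialization (the field name and the 0xaa fill value are assumptions for illustration, not taken from the actual patch):

```c
/* Sketch only: thread->arch.priv_stack_start and the 0xaa fill pattern
 * are assumptions; unused space is measured by scanning for untouched
 * fill bytes from the low end of the privileged stack.
 */
int arch_thread_priv_stack_space_get(const struct k_thread *thread,
				     size_t *stack_size, size_t *unused)
{
	const uint8_t *p = (const uint8_t *)thread->arch.priv_stack_start;
	size_t i;

	*stack_size = CONFIG_PRIVILEGED_STACK_SIZE;

	for (i = 0; i < *stack_size; i++) {
		if (p[i] != 0xaa) {
			break;
		}
	}
	*unused = i;

	return 0;
}
```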
This adds the bits to initialize the privileged stack for
each thread during thread initialization. This prevents
information leaking if the thread stack is reused, and
also aids in calculating stack space usage during system
calls.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
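Conceptually, the initialization amounts to something like the following (symbol names and the fill value are illustrative assumptions):

```c
/* Sketch: poison the privileged stack at thread init so stale data from
 * a previous owner of the stack object cannot leak, and so unused space
 * can later be measured by scanning for the fill pattern.
 */
memset((void *)thread->arch.priv_stack_start, 0xaa,
       CONFIG_PRIVILEGED_STACK_SIZE);
```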
Only set the privileged stack pointer for thread stacks, and
nullify the pointer for kernel-only stacks, as these stacks do
not have the reserved space. Otherwise the psp pointer may
point to arbitrary memory if the stack is not big enough.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
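A sketch of the described conditional; the predicate name is hypothetical:

```c
/* Sketch: thread stacks carry the reserved privileged-stack area, while
 * kernel-only stacks do not; nullify the pointer for the latter so psp
 * never points at arbitrary memory past a too-small stack object.
 * is_user_capable_stack() is a hypothetical helper.
 */
thread->arch.priv_stack_start =
	is_user_capable_stack(stack_obj) ? (uint32_t)stack_obj : 0U;
```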
This adds a new arch_thread_priv_stack_space_get() interface
for each architecture to report privileged stack space usage.
Each architecture will need to implement this function, as each
arch has its own way of defining privileged stacks.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
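The interface would be along these lines; treat the exact signature as an assumption, with each arch free to implement the scan however its privileged stacks are laid out:

```c
/* Report privileged stack usage for a thread.
 *
 * @param thread     Thread whose privileged stack is queried.
 * @param stack_size Filled with the total privileged stack size.
 * @param unused     Filled with the number of unused bytes.
 *
 * @return 0 on success, negative errno if not supported.
 */
int arch_thread_priv_stack_space_get(const struct k_thread *thread,
				     size_t *stack_size, size_t *unused);
```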
Before commit baa70d8d36 ("arch/arm64/mmu: fix page table reference
counting part 2") it was possible to perform a partial unmap of a block
mapping. Restore that ability and provide a test case to validate it.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
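A hypothetical shape for such a test, using the existing arch_mem_map()/arch_mem_unmap() interfaces (addresses, sizes and flags are illustrative):

```c
/* Sketch: map a 2MB block, then punch a one-page hole in the middle;
 * before the fix, such a partial unmap of a block mapping was not
 * possible. The rest of the block must remain mapped and usable.
 */
static void test_partial_block_unmap(uint8_t *va, uintptr_t pa)
{
	const size_t blk = 2 * 1024 * 1024;
	const size_t pg = 4096;

	arch_mem_map(va, pa, blk, K_MEM_PERM_RW | K_MEM_CACHE_WB);
	arch_mem_unmap(va + blk / 2, pg);

	va[0] = 0x42;        /* still mapped */
	va[blk - 1] = 0x42;  /* still mapped */
}
```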
This provides memory mappings with the ability to be initialized in their
paged-out state and be paged in on demand. This is especially nice for
anonymous memory mappings as they no longer have to allocate all memory
at mem_map time. This also allows for file mappings to be implemented by
simply providing backing store location tokens.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
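In use it might look roughly like this; the flag name below is my assumption, the point being that no physical memory is committed until first access:

```c
/* Sketch: create a 64KB anonymous mapping in its paged-out state.
 * K_MEM_MAP_UNPAGED is an assumed flag name; each page is only paged
 * in on demand when first touched.
 */
uint8_t *buf = k_mem_map(64 * 1024, K_MEM_PERM_RW | K_MEM_MAP_UNPAGED);

if (buf != NULL) {
	buf[0] = 1; /* faults in just the first page */
}
```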
Having `CONFIG_EXCEPTION_STACK_TRACE_SYMTAB` select
`CONFIG_SYMTAB`, or explicitly not print the symbol name during
exception stack unwind, seems unnecessary: the extra code to
print the symbol name is negligible compared with the symbol
table itself, so just remove it.
Signed-off-by: Yong Cong Sin <ycsin@meta.com>
Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
Currently only `esf`-based unwinding is supported. Update the
exception stack unwinding to use `arch_stack_walk()`, and
update the Kconfigs & testcase accordingly.
Also, `EXCEPTION_STACK_TRACE_MAX_FRAMES` is unused and
made redundant after this change, so remove it.
Signed-off-by: Yong Cong Sin <ycsin@meta.com>
Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
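The walker is callback-driven; a sketch of how an exception handler could drive it (the names follow the shape of the arch interface, but treat the details as illustrative):

```c
#include <zephyr/kernel.h>

/* Sketch: print every return address the walker finds. Returning true
 * from the callback keeps the walk going.
 */
static bool print_frame(void *cookie, unsigned long addr)
{
	int *count = cookie;

	printk("  #%d: %#lx\n", (*count)++, addr);
	return true;
}

static void dump_trace(const struct k_thread *thread,
		       const struct arch_esf *esf)
{
	int count = 0;

	arch_stack_walk(print_frame, &count, thread, esf);
}
```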
This commit introduces a new ARCH_STACKWALK Kconfig which
determines if `arch_stack_walk()` is available, should the
arch support it.
Starting with RISCV, this makes it possible to converge the
exception stack trace implementation & stack walking features.
Existing exception stack trace implementations will be updated
later.
Eventually we will end up with the following:
1. If an arch implements `arch_stack_walk()`,
   `ARCH_HAS_STACKWALK` should be selected.
2. If the above is enabled, `ARCH_SUPPORTS_STACKWALK` indicates
   whether the dependencies are met for the arch to enable
   stack walking. This Kconfig replaces
   `<arch>_EXCEPTION_STACK_TRACE`.
3. If the above is enabled, `ARCH_STACKWALK` determines whether
   `arch_stack_walk()` should be compiled.
4. `EXCEPTION_STACK_TRACE` builds on top of `ARCH_STACKWALK`;
   stack traces will be printed when it is enabled.
5. `EXCEPTION_STACK_TRACE_MAX_FRAMES` will be removed as it is
   replaced by `ARCH_STACKWALK_MAX_FRAMES`.
Signed-off-by: Yong Cong Sin <ycsin@meta.com>
Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
This updates the POSIX thread names to match the Zephyr thread
names. This simplifies debugging, as the debugger will
recognize the thread names.
Signed-off-by: Rubin Gerritsen <rubin.gerritsen@nordicsemi.no>
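On the host side the change conceptually boils down to the following sketch; the helper name is made up, and the 16-byte cap is the Linux pthread name limit (NUL included):

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <string.h>

/* Sketch: give the host pthread the Zephyr thread's name so the
 * debugger lists matching names, truncating to the Linux limit.
 */
static void sync_thread_name(const char *zephyr_name)
{
	char name[16];

	strncpy(name, zephyr_name, sizeof(name) - 1);
	name[sizeof(name) - 1] = '\0';
	(void)pthread_setname_np(pthread_self(), name);
}
```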
Newer Xtensa toolchains need to include xtensa-types.h so that
macros in tie.h can be used without compilation errors.
The exact version needing this is unknown, but it was first
encountered in RJ-2023.2. Testing with an older toolchain did
not show any compilation errors, so just include xtensa-types.h
if xt-clang is used. Newer toolchains have not been seen
generated with xcc, so skip that for now.
Note that the Zephyr SDK and the public HAL in Zephyr do not
provide this file.
Signed-off-by: Anthony Giardina <anthony.giardina@intel.com>
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
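The guard would look something like this; the toolchain-detection macro below is an assumption for illustration:

```c
/* Sketch: newer xt-clang toolchains need xtensa-types.h before tie.h,
 * while xcc, the Zephyr SDK and the public HAL do not ship the header,
 * so gate the include. __XT_CLANG__ is an assumed macro name.
 */
#if defined(__XT_CLANG__)
#include <xtensa/xtensa-types.h>
#endif
#include <xtensa/tie.h>
```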
The r0 register holds the system_off function pointer. As r0 is
a scratch register, the pointer needs to be moved to a
preserved register before branching to a (custom) marker
function.
Furthermore, in accordance with rule 6.2.1.2 of AAPCS32, the
stack pointer needs to be 8-byte aligned. Hence r0 is pushed to
the stack in addition to the lr register, before calling the
public interface for checking the s2ram marker.
Signed-off-by: Hessel van der Molen <hvandermolen@dexels.com>
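Expressed as a sketch in GNU inline assembly, assuming the marker-check interface is pm_s2ram_mark_check_and_clear() (the real code lives in an assembly file; the framing here is illustrative):

```c
/* Sketch: pushing r0 together with lr both preserves the system_off
 * pointer held in scratch register r0 across the call and keeps sp
 * 8-byte aligned (two registers pushed), per AAPCS32 rule 6.2.1.2.
 */
__asm__ volatile(
	"push {r0, lr}\n\t"
	"bl   pm_s2ram_mark_check_and_clear\n\t"
	"pop  {r0, lr}\n\t"
	::: "memory");
```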
Relocate the stack unwind backends from `arch/` to perf's
`backends/` folder, just like logging, shell, etc.
Signed-off-by: Yong Cong Sin <ycsin@meta.com>
Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
This commit fixes potential unpredictable behavior caused by
using the ^ form of the ldmia instruction while exiting an
exception in SMP mode on Cortex-A/R.
Change:
Use "pop" instead of "ldmia" to restore user mode registers while
exiting from an exception via `z_arm_cortex_ar_exit_exc`.
Reason for change:
Processor mode is always set to system (MODE_SYS) before
calling `z_arm_cortex_ar_exit_exc`, hence the user mode
registers can be accessed directly without the ^ form of the
instruction. Also, the ^ form of the LDMIA instruction is
UNPREDICTABLE in SYStem mode.
Signed-off-by: Sudan Landge <sudan.landge@arm.com>
This commit fixes the unpredictable behavior caused by using
the ^ form of the stmdb instruction while entering an exception
in SMP mode on Cortex-A/R.
Change:
Use "push" instead of "stmdb" to store user mode registers on
stack while entering an exception in SYStem mode.
Reason for change:
As reported in discussion/#75339, the processor is already in
SYS mode after entering `z_arm_cortex_ar_enter_exc()` in an
exception, and using stmdb with ^ is UNPREDICTABLE in system
mode. Also, the user mode registers can be accessed directly
without the ^ form of the instruction. The suggested fix is to
use `stmdb sp!, {r0-r3, r12, lr}`, which saves the user
registers, updates the SP and avoids an extra instruction.
We use the "push {}" instruction instead since it is the
preferred mnemonic over `stmdb`.
Signed-off-by: Sudan Landge <sudan.landge@arm.com>
Port of similar change in arm64 that eliminates exclusive load/store
instructions, which may not work when MMU/MPU/cache are disabled.
Based on: 7904c6f0f3
Signed-off-by: Stan Skowronek <stan@corellium.com>
xtensa_mmu_init() is called really early in the boot process
where the _kernel struct has not yet been initialized, and
thus we cannot use it to determine if the current CPU is
the boot CPU. In some cases, this may skip the call to
initialize the page tables which leaves us with incorrect
page table entries. Fix it by using a static variable to
determine whether the page tables have been initialized so
we only do it once per boot.
Fixes #76909
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
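A sketch of the fix described above; everything except the static flag is illustrative:

```c
/* Sketch: _kernel is not initialized this early in boot, so a static
 * flag, not the CPU id from _kernel, decides whether the page tables
 * still need to be set up; this runs exactly once per boot.
 */
void xtensa_mmu_init(void)
{
	static bool page_tables_initialized;

	if (!page_tables_initialized) {
		init_page_tables();  /* hypothetical helper */
		page_tables_initialized = true;
	}

	/* per-CPU MMU enablement continues here */
}
```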
Implement a stack trace function for the x86_32 arch that gets
the required thread register values and unwinds the stack with
them.
Signed-off-by: Mikhail Kushnerov <m.kushnerov@yadro.com>
Implement a stack trace function for the x86_64 arch that gets
the required thread register values and unwinds the stack with
them.
Originally-by: Yonatan Goldschmidt <yon.goldschmidt@gmail.com>
Signed-off-by: Mikhail Kushnerov <m.kushnerov@yadro.com>
Implement a stack trace function for the riscv arch that gets
the required thread register values and unwinds the stack with
them.
Signed-off-by: Mikhail Kushnerov <m.kushnerov@yadro.com>
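All three implementations follow the same frame-pointer walking idea; here is a simplified, arch-neutral sketch (real code must respect each ABI's exact frame layout and validate every address before dereferencing it):

```c
#include <zephyr/kernel.h>

/* Sketch: each frame stores the caller's frame pointer and a return
 * address at fixed offsets; follow the chain until it ends.
 */
struct stack_frame {
	struct stack_frame *fp; /* caller's frame pointer */
	void *ra;               /* return address */
};

static void walk_frames(const struct stack_frame *frame)
{
	while (frame != NULL && frame->ra != NULL) {
		printk("  ra: %p\n", frame->ra);
		frame = frame->fp;
	}
}
```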
is_dblexc is constant for targets without MMU. There is no need
to check it when building without MMU.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
When the native simulator use was introduced,
the POSIX architecture and SOC were split in two versions:
one was the old version, which remained in use by native_posix
and the other NATIVE_APPLICATION based targets; the other was a
shim on top of the native simulator threading and CPU
start/stop emulation.
This was done to ensure no regressions were introduced in the
old targets while the native simulator was tested and matured.
The old SOC code was removed shortly after, and all
NATIVE_APPLICATION targets moved to use the shim version on top
of the native simulator.
Now we also remove the old arch code, so native_posix and its
NATIVE_APPLICATION kin use the native simulator NCT component
instead.
Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
Custom arch_cpu_idle and arch_cpu_atomic_idle implementations
were done differently on different architectures: riscv
implemented those as weak symbols, xtensa used a Kconfig, and
all other architectures did not really care, even though this
was a global Kconfig that should apply to all architectures.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Clear the UNALIGN_TRP bit in the CCR register if
CONFIG_TRAP_UNALIGNED_ACCESS is not set.
Despite the fact that the reset value of UNALIGN_TRP is 0,
always clear the bit. This is useful in double-image systems:
the new image can't rely on settings left by the previous
image.
Signed-off-by: Dawid Niedzwiecki <dawidn@google.com>
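Using CMSIS names, the described behavior reduces to roughly this sketch:

```c
/* Sketch: write UNALIGN_TRP unconditionally so a new image never
 * inherits the unaligned-access trap setting of the previous one.
 */
#ifdef CONFIG_TRAP_UNALIGNED_ACCESS
	SCB->CCR |= SCB_CCR_UNALIGN_TRP_Msk;
#else
	SCB->CCR &= ~SCB_CCR_UNALIGN_TRP_Msk;
#endif
```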
arch_kernel_init() was misused for all architecture initialization code
that is done in prep_c and prior to cstart on other architectures.
arch_kernel_init() is late in the init process and comes after EARLY
init level, making xtensa have a very special boot path.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
xtensa is the only architecture doing things differently and
introduces inconsistency in the init process and dependencies
as we attempt to clean up init levels and remove misuse of
SYS_INIT.
Introduce prep_c for this architecture and align with other
architectures.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Account for the scenario where we are doing `esf`-based
unwinding from a function which doesn't have any callees.
In this case the `ra` is not saved on the stack and the
second function from the top of the frame could be missing.
Signed-off-by: Yong Cong Sin <ycsin@meta.com>
Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
Add a new config, ARCH_SUPPORTS_COREDUMP_THREADS, and only
enable it for ARM Cortex-M, where the gdb server can support
it.
Signed-off-by: Mark Holden <mholden@meta.com>
According to riscv's `arch.h`:

+------------+ <- thread.arch.priv_stack_start
| Guard      | } Z_RISCV_STACK_GUARD_SIZE
+------------+
| Priv Stack | } CONFIG_PRIVILEGED_STACK_SIZE
+------------+ <- thread.arch.priv_stack_start +
                  CONFIG_PRIVILEGED_STACK_SIZE +
                  Z_RISCV_STACK_GUARD_SIZE

The start of the privileged stack should be:
`thread.arch.priv_stack_start + Z_RISCV_STACK_GUARD_SIZE`
instead of:
`thread.arch.priv_stack_start - CONFIG_PRIVILEGED_STACK_SIZE`
For the `end`, use the same equation as `top_of_priv_stack` in
`arch_user_mode_enter()`.
Signed-off-by: Yong Cong Sin <ycsin@meta.com>
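In code form, the corrected bounds from the diagram are:

```c
/* Sketch of the corrected bounds, read straight off the diagram. */
uintptr_t start = thread->arch.priv_stack_start
		  + Z_RISCV_STACK_GUARD_SIZE;
uintptr_t end = thread->arch.priv_stack_start
		+ Z_RISCV_STACK_GUARD_SIZE
		+ CONFIG_PRIVILEGED_STACK_SIZE; /* same equation as
						 * top_of_priv_stack in
						 * arch_user_mode_enter() */
```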
When RISCV_ALWAYS_SWITCH_THROUGH_ECALL is enabled, do_swap() enables PMP
checking in is_kernel_syscall.
If the PMP stack guard is triggered and do_swap() is called from the
fault handler, a PMP error occurs because the stack usage violates the
previous PMP setting.
Remove the stack guard setting during the stack overflow
handler to allow safely enabling PMP checking in the fault
handler.
Signed-off-by: Jimmy Zheng <jimmyzhe@andestech.com>
When RISCV_ALWAYS_SWITCH_THROUGH_ECALL is enabled, do_swap() enables PMP
checking in is_kernel_syscall.
If a user thread violates memory protection and do_swap() is called from
the fault handler, a PMP error occurs because the thread is in privileged
mode but still using the old user mode PMP setting.
Update the PMP setting to privileged mode for the fault
handler.
This also enables the stack guard for the user thread's
privileged stack in the fault handler.
Signed-off-by: Jimmy Zheng <jimmyzhe@andestech.com>
When 'arch_switch()' switches through Ecall, 'exception_depth'
is incorrectly added to the next thread because the current
thread is updated before arch_switch().
Add 'exception_depth' back to the previous thread when Ecall is called from
'arch_switch()'.
Signed-off-by: Jimmy Zheng <jimmyzhe@andestech.com>
Before this, stack protection would be effective only after switching to
the first thread.
Even before the first thread is created, the kernel init code uses the
IRQ stack to set things up. Let's make sure this is safeguarded as well.
This also fixes the incompatibility between CONFIG_RISCV_PMP
and CONFIG_RISCV_ALWAYS_SWITCH_THROUGH_ECALL, the latter
needing an exception call to switch to the first thread while
the exception code assumes the stack guard is already set up
in the PMP.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Calls to other functions may clobber ip & lr too, so these
registers need to be added to the clobber list.
r3 is not actually used in
z_arm_switch_to_main_no_multithreading, so it is also removed
from the clobber list.
Signed-off-by: Wilfried Chauveau <wilfried.chauveau@arm.com>
Sets the property `PROPERTY_OUTPUT_FORMAT` to `elf32-bigarm` when
`CONFIG_BIG_ENDIAN` is set to `y`.
Signed-off-by: Sigmund Klåpbakken <sigmundklaa@outlook.com>
When this bit is not set, it defaults to 0 (little endian).
This causes issues for big-endian devices, as data will be
accessed as little endian.
Signed-off-by: Sigmund Klåpbakken <sigmundklaa@outlook.com>
Update the description of the `INCLUDE_RESET_VECTOR` Kconfig so
that it is clearer to the user what it does.
Signed-off-by: Yong Cong Sin <ycsin@meta.com>
ZSR_DEPC_SAVE being non-zero is used to determine whether we
are faulting inside a double exception. It is possible that the
boot ROM or custom startup code leaves this non-zero, which
would result in a fake triple fault. So clear it at boot. Note
that the zeroing is done in the MMU init code, as these triple
faults are not actual hardware ones but only semantic ones, and
can only occur once the MMU is enabled.
Fixes #75194
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
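The clearing itself is a one-line special-register write, roughly as below; the ZSR_DEPC_SAVE_STR spelling is an assumption based on how Zephyr names its generated scratch registers:

```c
/* Sketch: zero the scratch register used for double-exception
 * detection so stale boot ROM contents cannot fake a triple fault
 * once the MMU is enabled.
 */
__asm__ volatile("wsr %0, " ZSR_DEPC_SAVE_STR "\n\t"
		 "rsync"
		 :: "r"(0));
```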