Improve FPU trap debugging by showing the program counter (PC) of
instructions that trigger FPU access traps instead of potentially
stale saved FPU context data.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
CONFIG_FLASH_SIZE and CONFIG_FLASH_BASE_ADDRESS symbols were not defined in
native_sim even though it has a flash controller and flash defined.
Signed-off-by: Flavio Ceolin <flavio@hubble.com>
Select ARCH_SUPPORTS_COREDUMP_STACK_PTR on xtensa, and provide an
implementation for the arch_coredump_stack_ptr_get function.
Signed-off-by: Mark Holden <mholden@meta.com>
Do not directly include and use APIs from ksched.h outside of the
kernel. For now, use the more suitable internal APIs (ipi.h and
kernel_internal.h) until further cleanup is done.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
To support unprivileged mode (CONFIG_USERSPACE):
- Set unprivileged PAC key registers when the system is in
unprivileged mode.
- Add `bti` after each svc call, to make sure that the indirect jumps on
`lr` while returning from an `svc` don't result in a usage fault.
Signed-off-by: Sudan Landge <sudan.landge@arm.com>
Add a config option to set unique PAC keys per thread and
make sure to retain them during context switch.
Signed-off-by: Sudan Landge <sudan.landge@arm.com>
As part of enabling PACBTI support:
- Add config options to enforce PAC and BTI features
- Enable these config options based on the branch protection choice
selected for `ARM_PACBTI`
- Enforce PACBTI, based on the new config options, by enabling the
corresponding PACBTI bits in the CONTROL register and in FVP.
Signed-off-by: Sudan Landge <sudan.landge@arm.com>
Rename and move PACBTI config options to the common Kconfig so that
they can be reused for arm64 in the future.
Signed-off-by: Sudan Landge <sudan.landge@arm.com>
z_prep_c does not return; mark it as such consistently across
architectures. Some arches already did this, others did not. This
resolves a few coding guideline violations in arch code.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Drop the private API prefix and move these functions to the
architecture interface, as they are primarily used across arches and
can be defined by the architecture.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Drop the private API prefix and move these functions to the
architecture interface, as they are primarily used across arches and
can be defined by the architecture.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Drop the private API prefix and move these functions to the
architecture interface, as they are primarily used across arches and
can be defined by the architecture.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Drop the private API prefix and move these functions to the
architecture interface, as they are primarily used across arches and
can be defined by the architecture.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Clean up init.c code, move early boot code into arch/, and make it
accessible outside of the boot process/kernel.
All of this code is unrelated to the 'kernel' and is mostly used
within the architecture boot/setup process.
Previously, some SoC code was including kernel_internal.h directly,
which should not be done.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Move under arch, as this is not really a kernel feature. arch also
matches the test identifier already in place.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Not really a kernel feature; more an architecture one, which is
reflected in how XIP is enabled and tested. Move it to architecture
code, where much of the 'implementation' and usage lives.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Introduce a new Kconfig option CPU_CORTEX_A78 to enable support for the
Arm Cortex-A78 CPU architecture within Zephyr. This configuration can be
selected by boards or SoCs that utilize the Cortex-A78 core, enabling
architecture-specific features and optimizations as needed.
Signed-off-by: Appana Durga Kedareswara rao <appana.durga.kedareswara.rao@amd.com>
- No more need for special IRQ shadow stacks - just reuse the one
created for z_interrupt_stacks;
- Add the linker sections for the pairs of stack/shadow stack;
- Support shadow stack arrays.
The last item was a bit challenging: shadow stacks need to be
initialised before use, and this is done statically for normal shadow
stacks. To initialise the shadow stacks in the array, one needs to know
how many entries it has. While a simple approach would use `LISTIFY` to
then do the initialization on all entries, that is not possible, as
many stack arrays are created using expressions instead of literals,
such as `CONFIG_MP_MAX_NUM_CPUS - 1`, which won't work with `LISTIFY`.
Instead, this patch uses a script, `gen_static_shstk_array.py`, that
gathers all needed information and patches the ELF to initialize the
stack arrays. Note that this needs to be done before any other operation
on the ELF file that creates new representations, such as the .bin
output.
Signed-off-by: Ederson de Souza <ederson.desouza@intel.com>
It seems that, at least in tests, it's common to call k_thread_create()
on a thread multiple times. This trips a check for the CET shadow stack
- namely, setting a shadow stack on a thread which already has one.
This patch adds a Kconfig option to allow that, iff the base address and
size of the new shadow stack are the same as before. This will trigger a
reset of the shadow stack, so it can be reused.
It may be the case that this behaviour (reusing threads) is more common
than in tests only, in which case it could make sense to change the
default - in this patch, it is true only if ZTEST is enabled.
Even if being enabled by default becomes the reality, it would still
make sense to keep this option - more conscious apps could avoid the
need for the shadow stack reset code altogether.
Signed-off-by: Ederson de Souza <ederson.desouza@intel.com>
So that kernel created threads can use shadow stacks. Note that
CONFIG_X86_CET_SHADOW_STACK is abandoned in favour of
CONFIG_HW_SHADOW_STACK.
This means changing some types, functions and macros throughout the
shadow stack code.
Signed-off-by: Ederson de Souza <ederson.desouza@intel.com>
In order to allow kernel created threads (such as main and idle threads)
to make use of hardware shadow stack implementation, add an interface
for them.
This patch basically provides the infrastructure that architectures
need to implement to provide hardware shadow stacks.
Also, main and idle threads are updated to make use of this interface
(if hardware shadow stacks are enabled).
Signed-off-by: Ederson de Souza <ederson.desouza@intel.com>
Some SoCs may need to do some preparatory work before changing the
current shadow stack pointer (and thus, currently used shadow stack).
This patch adds a hook for that, guarded by a Kconfig option
(CONFIG_X86_CET_SOC_PREPARE_SHADOW_STACK_SWITCH).
As currently only 32-bit SoCs may use this, support is only added to
the 32-bit code.
Signed-off-by: Ederson de Souza <ederson.desouza@intel.com>
The most notable difference in base support is the need to keep the
shadow stack tokens, which are 8 bytes long, 8-byte aligned. Some
helper macros are used for that.
Also, an `ssp` entry is added to the task state segment (TSS).
Signed-off-by: Ederson de Souza <ederson.desouza@intel.com>
Currently, it's permitted to have threads that don't have a shadow
stack. When those are run, shadow stack is disabled on the CPU. To
identify those, the thread `shstk_addr` member is checked.
This patch adds an optional check, behind
CONFIG_X86_CET_VERIFY_KERNEL_SHADOW_STACK, that checks if an outgoing
thread has this pointer NULL with shadow stack currently enabled on
the CPU, meaning a 1) bug or 2) some attempt to tamper with the pointer.
If the check fails, k_panic() is called. Note that this verification is
not enough to guarantee `shstk_addr` can't be tampered with. For
instance, it only works on a running thread. Ideally, all threads should
be shadow stack capable, so a missing `shstk_addr` would simply be a
hard fault, but that is still to come.
Signed-off-by: Ederson de Souza <ederson.desouza@intel.com>
The main peculiarity is that if an exception results in the current
thread being aborted, we need to clear the busy bit on the shadow
stack on the swap
to the new thread, otherwise future exceptions will fail when trying to
use a busy shadow stack.
Signed-off-by: Ederson de Souza <ederson.desouza@intel.com>
Nested interrupts are supported, on the normal stack, by creating a
stack whose size is a multiple of CONFIG_ISR_DEPTH, and updating the
pointer used by Interrupt Stack Table (IST) to point to a new base,
inside the "oversized" stack.
The same approach is used for the shadow stack: the shadow stack size
is multiplied by CONFIG_ISR_DEPTH, and the pointer to the stack in the
shadow stack pointer table is updated to point to the next base.
Signed-off-by: Ederson de Souza <ederson.desouza@intel.com>
For IRQs, the shadow stack supports a mechanism similar to the
Interrupt Stack Table (IST) on x86_64: a table, indexed by the IST
index, pointing to a 64 byte table in memory containing the addresses
of seven shadow stacks to be used by the interrupt service routines.
This patch adds support for this mechanism. It is worth noting that,
as Zephyr may exit an interrupt into a different thread than the one
that was interrupted, some housekeeping is done to ensure that the
necessary shadow stack tokens are on the right shadow stack before
returning from the interrupt.
Signed-off-by: Ederson de Souza <ederson.desouza@intel.com>
Shadow Stack is one of the capabilities provided by Intel Control-flow
Enforcement Technology (CET), aimed at defending against Return Oriented
Programming.
This patch enables it for x86_64 (32-bit support coming in future
patches):
- Add relevant Kconfigs;
- Shadow stacks should live in specially defined memory pages, so
gen_mmu.py was updated to allow that;
- A new macro, Z_X86_SHADOW_STACK_DEFINE, added to define the area
for a shadow stack;
- A new function, z_x86_thread_attach_shadow_stack(), added to
attach a shadow stack to a never started thread;
- locore.S changed to enable/disable shadow stack when a thread
using it comes in/out of execution.
As not all threads are currently shadow stack capable, threads that do
not use it will still run with the shadow stack disabled. Ideally, at
some point in the future, all threads would use the shadow stack,
removing the need to disable it at all.
Signed-off-by: Ederson de Souza <ederson.desouza@intel.com>
Add code to enable it and sprinkle `endbr64` over asm code where
needed - namely, IRQ and exception entry points.
Finally, tests are added to ensure IBT behaves sanely.
Signed-off-by: Ederson de Souza <ederson.desouza@intel.com>
Indirect Branch Tracking (IBT) is one of the capabilities provided by
Intel Control-flow Enforcement Technology (CET), aimed at defending
against Jump/Call Oriented Programming.
This patch enables it for x86 (32-bit, 64-bit support coming in future
patches):
- Add relevant Kconfigs (everything is behind X86_CET);
- Code to enable it;
- Enable compiler flags to enable it;
- Add `endbr32` instructions to asm code, where needed.
Points in the code where an indirect branch is expected to land need
special instructions that tell the CPU they are valid indirect branch
targets. Those are added by the compiler, so toolchain support is
necessary. Note that any code added to the final ELF also needs those
markers, such as libc or libgcc.
Finally, tests are added to ensure IBT behaves sanely.
Signed-off-by: Ederson de Souza <ederson.desouza@intel.com>
This adds handling of the control protection exception to the fatal
error code.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Signed-off-by: Ederson de Souza <ederson.desouza@intel.com>
Commit f9168ae464 made all non-cached memory
loadable by default.
However, as nocache memory is typically used for reserving larger
buffers to be shared between peripherals, this comes at a fairly large
cost in ROM usage.
This commit creates two distinct sections for both loadable and
non-loadable nocache memory sections.
Signed-off-by: Pieter De Gendt <pieter.degendt@basalte.be>
The RXv2 and RXv3 cores feature a configuration to relocate the
exception vector table by setting the EXTB register in the CPU. This
commit adds the Kconfig option and code handling to support it.
Signed-off-by: Duy Nguyen <duy.nguyen.xa@renesas.com>
Fix an issue where 1 vector is being requested when MSI-X is enabled.
The previous logic always assumed the PCIe device has only fixed or
single MSI when requesting 1 vector, which is not entirely correct. So
if no vector has been allocated already, try to allocate one.
Fixes #93319
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
arch_pcie_msi_vectors_allocate() has a return type of uint8_t.
One of the error paths returns -1, which would result in 255 being
returned. Fix that by returning 0 instead, as no vector is being
allocated anyway.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
If the GDB stub is enabled the exception handler will jump to the GDB
stub to allow remote GDB debugging.
Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
GCC 14.3 will happily delete any code that appears before
__builtin_unreachable that isn't separated with an obvious branch. That
includes __asm__ statements, even those which generate traps.
The failure case that I debugged was on x86 in
z_check_stack_sentinel. There is a store to restore the sentinel to the
correct value just before the ARCH_EXCEPT, and that macro emits 'int $32'
followed by CODE_UNREACHABLE. Because the compiler didn't understand that
ARCH_EXCEPT was changing execution flow, it decided that the sentinel
restoring store 'couldn't' be reached and elided it.
I added the "memory" clobber to the asm statement in ARCH_EXCEPT before
CODE_UNREACHABLE to enforce that all pending store operations be performed
before the asm statement occurs. This ensures that they are not deleted by
the compiler.
I think this might be a GCC bug. The GCC documentation explicitly documents
that asm statements which change the flow of control should be followed by
__builtin_unreachable.
Signed-off-by: Keith Packard <keithp@keithp.com>
Added a helper Kconfig option USE_ISR_WRAPPER which can be used to
include isr_wrapper even if GEN_SW_ISR_TABLE is not enabled. This
is needed to enable configurations where only the IRQ vector table is
used with multithreading (only direct ISRs used). This change is
backward compatible with previous configs.
Signed-off-by: Łukasz Stępnicki <lukasz.stepnicki@nordicsemi.no>
Add new API to save and restore SCB context. This is typically useful when
entering and exiting suspend-to-RAM low-power modes.
The scb_context_t and the backup/restore functions are designed to only
handle SCB registers that are:
- Mutable: Their values can be changed by software.
- Configurable: They control system behavior or features.
- Stateful: Their values represent a specific configuration that an
application might want to preserve and restore.
Registers excluded from backup/restore are:
1. CPU/feature identification registers
Motivation: These registers are fixed in hardware and read-only.
2. ICSR (Interrupt Control and State Register)
Motivation: Most ICSR bits are read-only or write-only
and represent volatile system state. STTNS is the only read-write
field and could be considered part of the system state, but it is
only present on certain ARMv8-M CPUs, and Zephyr does not use it.
3. CFSR (Configurable Fault Status Register)
HFSR (HardFault Status Register)
DFSR (Debug Fault Status Register)
AFSR (Auxiliary Fault Status Register)
MMFAR (MemManage Fault Address Register)
BFAR (BusFault Address Register)
Motivation: These registers are read/write-one-to-clear and
contain only fault-related information (which is volatile).
Co-authored-by: Mathieu Choplain <mathieu.choplain@st.com>
Signed-off-by: Michele Sardo <msmttchr@gmail.com>
Signed-off-by: Mathieu Choplain <mathieu.choplain@st.com>
- There are linker file directives that must come at the
start of the noinit region. For example, the directive
that allows that section to not exist in RAM before a
certain address (. = MAX(ABSOLUTE(.), 0x34002000);).
- Before this update, those could only be added to the end
of that region. They now have the option to be at the
beginning or the end.
Signed-off-by: Bill Waters <bill.waters@infineon.com>
More complex suspend and resume schemes might require an exactly
defined location for this variable due to platform-peculiar SW and HW
requirements.
A DTS zephyr,memory-region node with nodelabel `pm_s2ram` shall be
used for automatic definition of the linker section for this purpose.
Signed-off-by: Andrzej Puzdrowski <andrzej.puzdrowski@nordicsemi.no>
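A devicetree fragment of the shape described might look like this (the address, size, and region name are hypothetical, for illustration only):

```dts
/* Hypothetical placement of the S2RAM marker variable region. */
pm_s2ram: memory@20070000 {
    compatible = "zephyr,memory-region";
    reg = <0x20070000 0x100>;
    zephyr,memory-region = "PM_S2RAM";
};
```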
Currently this directive is not supported in EWARM 9.70.1,
it will be in future versions, but we want Zephyr 4.2
to work with IAR EWARM 9.70.1.
Signed-off-by: Robin Kastberg <robin.kastberg@iar.com>