Work around an issue where the emulator ignores host OS
signals when inside a `wfi` instruction.
This should be reverted once this has been addressed in the
AARCH64 build of QEMU in the SDK.
See https://github.com/zephyrproject-rtos/sdk-ng/issues/255
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
When the _arch_switch() API is used, the tracing of the thread swapped out
is done in the C kernel code (in do_swap() for cooperative scheduling
and in set_current() during preemption). In the assembly code we only
have to trace the thread when swapped in.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Cortex-M SoCs implement (optionally) the Data Watchpoint and
Tracing Unit (DWT), which can be used for timing functions.
Select the corresponding ARCH capability if the SoC implements
the DWT.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
This code had one purpose only: feeding timing information into a test.
It was not used by anything else. The custom trace points were
unfortunately not accurate, and this test was delivering information that
conflicted with other tests we have, due to the placement of such trace
points in the architecture and kernel code.
For such measurements we are planning to use the tracing functionality
in a special mode that would be used for metrics without polluting the
architecture and kernel code with additional tracing and timing code.
Furthermore, much of the assembly code used had issues.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Switch to the _arch_switch() API that is required for an SMP-aware
scheduler instead of using the old arch_swap mechanism.
SMP is not supported yet but this is a necessary step in that direction.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Provide a TZ_SAFE_ENTRY_FUNC() macro for wrapping non-secure entry
functions in calls to k_sched_lock()/k_sched_unlock().
Provide a __TZ_WRAP_FUNC() macro which helps in creating a function
that "wraps" another in a preface and postface function call.
    int foo(char *arg); // Implemented somewhere else.
    int __attribute__((naked)) foo_wrapped(char *arg)
    {
        __TZ_WRAP_FUNC(bar, foo, baz);
    }

is equivalent to

    int foo(char *arg); // Implemented somewhere else.
    int foo_wrapped(char *arg)
    {
        bar();
        int res = foo(arg);
        baz();
        return res;
    }
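For TZ_SAFE_ENTRY_FUNC(), the wrapped call behaves roughly like the
following sketch (illustrative only, not the macro's actual expansion),
with the scheduler locked around the non-secure entry function:

    int foo_safe(char *arg)
    {
        k_sched_lock();
        int res = foo(arg);
        k_sched_unlock();
        return res;
    }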
This commit also adds tests for __TZ_WRAP_FUNC().
Signed-off-by: Øyvind Rønningstad <oyvind.ronningstad@nordicsemi.no>
The same code was being copy-pasted into k_thread_abort()
implementations; just move it into z_thread_single_abort().
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This isn't needed; match the vanilla implementation
in kernel/thread_abort.c and do this unlocked. This
should improve system latency.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
A check was being done that was a more obscure way of
calling arch_is_in_isr(). Add a comment explaining
why we need to trigger PendSV.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
We implement an ARM-only API for ARM Secure Firmware,
to set all NVIC IRQ lines to target the Non-Secure state.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
We modify the ARM Cortex-M-only API for managing the
security target state of the NVIC IRQs. We remove the
internal ASSERT checking, allowing the API to be called for
non-implemented NVIC IRQ lines. However, we still give the
user the option to check whether setting the IRQ target
state succeeded, by having the API function return the
resulting target state.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Now that the device_api attribute, like all the other attributes, is
unmodified at runtime, it is possible to switch all device driver
instances to be constant.
A coccinelle rule is used for this:
@r_const_dev_1
disable optional_qualifier
@
@@
-struct device *
+const struct device *
@r_const_dev_2
disable optional_qualifier
@
@@
-struct device * const
+const struct device *
Fixes #27399
Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
To debug hard-to-reproduce faults/panics, it's helpful to get the full
register state at the time a fault occurred. This enables recovering
full backtraces and the state of local variables at the time of a
crash.
This PR introduces a new Kconfig option, CONFIG_EXTRA_EXCEPTION_INFO,
to facilitate this use case. The option enables the capturing of the
callee-saved register state (r4-r11 & exc_return) during a fault. The
info is forwarded to `k_sys_fatal_error_handler` in the z_arch_esf_t
parameter. From there, the data can be saved for post-mortem analysis.
To test the functionality, a new unit test was added to
tests/arch/arm_interrupt which verifies that the register contents
passed in the argument match the state leading up to a crash.
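As a rough sketch of the use case (not part of this commit; the __noinit
buffer is just one illustrative way to keep the data around for
post-mortem analysis), an application can override the weak fatal error
handler and stash the exception frame:

    #include <zephyr.h>
    #include <fatal.h>

    /* Illustrative: keep the last exception frame across a warm reboot. */
    static z_arch_esf_t last_esf __noinit;

    void k_sys_fatal_error_handler(unsigned int reason,
                                   const z_arch_esf_t *esf)
    {
        if (esf != NULL) {
            /* With CONFIG_EXTRA_EXCEPTION_INFO=y this also carries the
             * callee-saved state (r4-r11, exc_return), which post-mortem
             * tooling can use to rebuild a backtrace.
             */
            last_esf = *esf;
        }
        k_fatal_halt(reason);
    }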
Signed-off-by: Chris Coleman <chris@memfault.com>
Saves us a few bytes of program text on arches that don't need
these implemented, currently all uniprocessor MPU-based systems.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
All of these should be no-ops for the following reasons:
1. User threads cannot configure memory domains, only supervisor
threads.
2. The scope of memory domains is user thread memory access,
supervisor threads can access the entire memory map.
Hence it's never required to reprogram the MPU when a memory domain
API is called.
Fixes a problem where an assertion would fail if a supervisor thread
added a partition and then immediately removed it, and possibly
other problems.
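In other words, on such arches these hooks can simply do nothing. A
minimal sketch of the idea (the function name follows the memory domain
arch interface, but the empty body is purely illustrative, not the
actual change):

    void arch_mem_domain_partition_add(struct k_mem_domain *domain,
                                       uint32_t partition_id)
    {
        /* Only user threads are constrained by memory domains, and only
         * supervisor threads can call the domain APIs, so there is
         * nothing to reprogram in the MPU at this point.
         */
        ARG_UNUSED(domain);
        ARG_UNUSED(partition_id);
    }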
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This adds the necessary bits in arch code, and Python scripts
to enable coredump support for ARM Cortex-M.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Make explicit which registers are going to be touched / modified when
using z_arm64_enter_exc and z_arm64_exit_exc.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
The default implementation is the same as this custom
one now, as the assertion that the context switch occurs
at the end of the ISR is true for all arches.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Unify how XIP is configured across architectures. Use imply instead of
setting defaults per architecture, imply XIP on the riscv arch, and
remove XIP configuration from individual defconfig files to match other
architectures.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
This commit adds the support for HW Stack Protection when
building Zephyr without support for multi-threading. The
single MPU guard (if the feature is enabled) is set to
guard the Main stack area. The stack fail check is also
updated.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
For the case of building Zephyr with no multithreading
support (CONFIG_MULTITHREADING=n), we introduce a
custom (ARCH-specific) function to switch to main()
from cstart(). This is required, since the Cortex-M
initialization code is temporarily using the interrupt
stack and main() should be using the z_main_stack,
instead. The function performs the PSP switching,
the PSPLIM setting (for ARMv8-M), FPU initialization
and static memory region initialization, to mimic
what the normal (CONFIG_MULTITHREADING=y) case does.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
We extract the common code for both multithreading and
non-multithreading cases into a common static function
which will get called during Cortex-M architecture initialization.
This commit does not introduce behavioral changes.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
This patch simply adds the guard area (if applicable) to
the calculations for the size of the interrupt stack in reset.S
for the ARM Cortex-M architecture. If it exists, the guard area is
always reserved in addition to CONFIG_ISR_STACK_SIZE, since the
interrupt stack is defined using K_KERNEL_STACK_DEFINE.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
These stacks are appropriate for threads that run purely in
supervisor mode, and also as stacks for interrupt and exception
handling.
Two new arch defines are introduced:
- ARCH_KERNEL_STACK_GUARD_SIZE
- ARCH_KERNEL_STACK_OBJ_ALIGN
New public declaration macros:
- K_KERNEL_STACK_RESERVED
- K_KERNEL_STACK_EXTERN
- K_KERNEL_STACK_DEFINE
- K_KERNEL_STACK_ARRAY_DEFINE
- K_KERNEL_STACK_MEMBER
- K_KERNEL_STACK_SIZEOF
If user mode is not enabled, K_KERNEL_STACK_* and K_THREAD_STACK_*
are equivalent.
Separately generated privilege elevation stacks are now declared
like kernel stacks, removing the need for K_PRIVILEGE_STACK_ALIGN.
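A brief usage sketch for a supervisor-only thread (names, sizes, and the
worker_entry function are illustrative):

    K_KERNEL_STACK_DEFINE(worker_stack, 1024);
    static struct k_thread worker;

    /* worker_entry is an assumed k_thread_entry_t implemented elsewhere */
    k_thread_create(&worker, worker_stack,
                    K_KERNEL_STACK_SIZEOF(worker_stack),
                    worker_entry, NULL, NULL, NULL,
                    K_PRIO_PREEMPT(5), 0, K_NO_WAIT);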
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This now takes a stack pointer as an argument with TLS
and random offsets accounted for properly.
Based on #24467 authored by Flavio Ceolin.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The core kernel computes the initial stack pointer
for a thread, properly aligning it and subtracting out
any random offsets or thread-local storage areas.
arch_new_thread() no longer needs to make any calculations;
an initial stack frame may be placed at the bounds of
the new 'stack_ptr' parameter passed in. This parameter
replaces 'stack_size'.
thread->stack_info is now set before arch_new_thread()
is invoked, z_new_thread_init() has been removed.
The values populated may need to be adjusted on arches
which carve out MPU guard space from the actual stack
buffer.
thread->stack_info now has a new member 'delta' which
indicates any offset applied for TLS or random offset.
It's used so the calculations don't need to be repeated
if the thread later drops to user mode.
CONFIG_INIT_STACKS logic is now performed inside
z_setup_new_thread(), before arch_new_thread() is called.
thread->stack_info is now defined as the canonical
user-accessible area within the stack object, including
random offsets and TLS. It will never include any
carved-out memory for MPU guards and must be updated at
runtime if guards are removed.
Available stack space is now optimized. Some arches may
need to significantly round up the buffer size to account
for page-level granularity or MPU power-of-two requirements.
This space is now accounted for and used by virtue of
the Z_THREAD_STACK_SIZE_ADJUST() call in z_setup_new_thread.
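A rough sketch of what an arch port does now (the signature is
simplified, and 'struct init_frame' and the switch_handle assignment are
illustrative rather than actual port code):

    /* hypothetical initial-frame layout; the real one is arch-specific */
    struct init_frame {
        void *entry;
        void *p1, *p2, *p3;
    };

    void arch_new_thread(struct k_thread *thread, k_thread_stack_t *stack,
                         char *stack_ptr, k_thread_entry_t entry,
                         void *p1, void *p2, void *p3)
    {
        /* stack_ptr already has alignment, TLS and any random offset
         * applied by the core kernel; just place the initial frame
         * at its bounds.
         */
        struct init_frame *frame =
            (struct init_frame *)(stack_ptr - sizeof(*frame));

        /* ... populate frame with entry, p1..p3, initial state ... */

        thread->switch_handle = frame; /* or arch-specific equivalent */
    }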
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
MISRA-C wants the parameter names in a function implementation
to match the names used by the header prototype.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This interface is already documented in
kernel/include/kernel_arch_interface.h.
Other architectural notes were left in place, except where
they were incorrect (like the thread struct
being in the low stack addresses).
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
arch_new_thread() passes along the thread priority and option
flags, but these are already initialized in thread->base and
can be accessed there if needed.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
In CPUs with VTOR we are free to place the relay vector table
section anywhere inside the ROM_START section (as long as we respect
alignment requirements). This PR moves the relay table towards
the end of ROM_START. This leaves sufficient area for placing
some SoC-specific sections inside ROM_START that need to start
at a fixed address.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
_vector_table and __vector_relay_table symbols were exported with GTEXT
(i.e. as functions). That resulted in bit[0] being incorrectly set in
the addresses they represent (for functions this bit set to 1 specifies
execution in Thumb state).
This commit corrects this by switching to exporting these symbols as
objects, i.e. with GDATA.
Signed-off-by: Andrzej Głąbek <andrzej.glabek@nordicsemi.no>
MISRA-C directive 4.10 requires that a file being included must
prevent itself from being included more than once. So add
include guards to the offset files, even though they are C
source files.
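For example, each offsets file gets a guard of this shape (the macro
name here is illustrative):

    #ifndef _ARM_OFFSETS_INC_
    #define _ARM_OFFSETS_INC_

    /* ... generated offset definitions ... */

    #endif /* _ARM_OFFSETS_INC_ */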
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Race conditions exist when remapping the NXP MPU. When writing the
start, end, or attribute registers of a MPU descriptor, the hardware
will automatically clear the region's valid bit. If that region gets
accessed before the code is able to set the valid bit, the core will
fault.
Issue #20595 revealed this problem with the code in region_init()
when the compiler options are set to no optimizations. The code
generated by the compiler put local variables on the stack and then
read those stack based variables when writing the MPU descriptor
registers. If that region mapped the stack a memory fault would occur.
Higher compiler optimizations would store these local variables in
CPU registers which avoided the memory access when programming the
MPU descriptor.
Because the NXP MPU uses a logic OR operation of the MPU descriptors,
the fix uses the last descriptor in the MPU hardware to remap all of
dynamic memory for access instead of the first of the dynamic memory
descriptors, as was occurring before. This allows reprogramming of the
primary descriptor blocks without causing a memory fault. After all
the dynamic memory blocks are mapped, the unused blocks will have
their valid bits cleared, including this temporary one, if it wasn't
already changed during the mapping of the current set.
Fixes #20595
Signed-off-by: David Leach <david.leach@nxp.com>
Zephyr applications will always use the VTOR register when it is
available on the CPU and the register will always be configured
to point to the application's vector table during startup.
SW_VECTOR_RELAY_CLIENT is meant to be used only on baseline ARM cores.
SW_VECTOR_RELAY is intended to be used only by the bootloader.
The bootloader may configure the VTOR to point to the relay table
right before chain-loading the application.
Signed-off-by: Rafał Kuźnia <rafal.kuznia@nordicsemi.no>
Select either SW_VECTOR_RELAY or SW_VECTOR_RELAY_CLIENT,
but only one at a time.
Removed the #ifdef-ery in irq_relay.S, as SW_VECTOR_RELAY was
refined so that it became reserved for the bootloader and it now
conditionally includes irq_relay.S for compilation.
See SHA #fde3116f19
Signed-off-by: Andrzej Puzdrowski <andrzej.puzdrowski@nordicsemi.no>
Signed-off-by: Rafał Kuźnia <rafal.kuznia@nordicsemi.no>
This patch allows the `SW_VECTOR_RELAY` and
`SW_VECTOR_RELAY_CLIENT` pair to be
enabled on the ARMv7-M and ARMv8-M architectures
and covers all additional interrupt vectors.
Signed-off-by: Rafał Kuźnia <rafal.kuznia@nordicsemi.no>
Signed-off-by: Andrzej Puzdrowski <andrzej.puzdrowski@nordicsemi.no>