We get the following error when building with arm-clang:
error: non-ASM statement in naked function is not supported
__TZ_WRAP_FUNC(preface, foo1, postface);
^
tests/arch/arm/arm_tz_wrap_func/src/main.c:69:25: note: attribute is here
uint32_t __attribute__((naked)) wrap_foo1(uint32_t arg1, uint32_t arg2,
^
1 error generated.
Remove the do/while wrapper to make this a true naked function.
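For reference, a minimal sketch of the constraint (the body below is
illustrative, not the actual wrapper code): with the do/while wrapper
gone, the naked function contains nothing but a basic asm statement,
which is all the compiler accepts here.

    #include <stdint.h>

    uint32_t foo1(uint32_t arg1, uint32_t arg2);

    /* A naked function may only contain a basic asm statement; any C
     * statement (such as a do { ... } while (0) wrapper emitted by a
     * macro) is rejected by arm-clang.
     */
    uint32_t __attribute__((naked)) wrap_foo1(uint32_t arg1, uint32_t arg2)
    {
            __asm__ volatile (
                    "push {r4, lr}\n"
                    "bl foo1\n"
                    "pop {r4, pc}\n"
            );
    }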
Signed-off-by: Kumar Gala <kumar.gala@intel.com>
Revert commit 44628735b8
That commit broke the ability of the NXP RT series to reset except
with a power cycle.
Signed-off-by: Declan Snyder <declan.snyder@nxp.com>
The current implementation of the cache management APIs for ARM only
applies to Cortex-M, so move it to its own directory.
Signed-off-by: Manuel Argüelles <manuel.arguelles@nxp.com>
Fix an unneeded chain of includes. Since zefi is built separately
(using a python script), any dependency creates an include chain with
possibly missing configuration options.
Signed-off-by: Andrei Emeltchenko <andrei.emeltchenko@intel.com>
Update the current stack limit on every context switch, including when
switching to the IRQ stack and switching back to the thread stack.
Signed-off-by: Jaxson Han <jaxson.han@arm.com>
This commit mainly enables the safe exception stack, including the
stack switch. Init the safe exception stack by calling
z_arm64_safe_exception_stack during the boot stage on every core. Also
tweak several files to properly switch the mode in the different cases.
1) The same as before, when executing in userspace, SP_EL0 holds the
user stack and SP_EL1 holds the privileged stack, using EL1h mode.
2) When entering an exception from EL0, SP_EL0 is saved in the _esf_t
structure. SP_EL1 remains the current SP, and the safe exception stack
is then loaded into SP_EL0, making sure SP_EL0 always points to the
safe exception stack as long as the system is running in kernel space.
3) When exiting an exception from EL1 to EL0, SP_EL0 is restored from
the value previously saved in the _esf_t structure, still at EL1h mode.
4) Whether entering or exiting an exception from EL1 to EL1, SP_EL0
keeps holding the safe exception stack unchanged, as mentioned above.
5) Do a quick stack check every time an exception is taken from EL1 to
EL1. If the check fails, set SP_EL1 to the safe exception stack, and
then handle the fatal error.
Overall, exceptions from user mode are handled on the kernel stack, on
the assumption that a stack overflow cannot happen at the entry of an
exception from EL0 to EL1. Exceptions from kernel mode, however, are
first checked against the safe exception stack to see if the kernel
stack overflowed, because the exception might have been triggered by
an invalid stack access.
Signed-off-by: Jaxson Han <jaxson.han@arm.com>
Add a safe exception stack init function which does several things:
1) Set the current CPU's safe exception stack pointer to its
corresponding stack top.
2) Init sp_el0 with the above safe exception stack, which makes sure
sp_el0 points to the per-CPU safe_stack while in kernel space.
3) Init current_stack_limit and corrupted_sp with 0.
Signed-off-by: Jaxson Han <jaxson.han@arm.com>
In preparation for enabling the safe exception stack, add a variable
in _esf_t to save the user stack held by sp_el0 at the point an
exception is taken from EL0. The newly added variable in _esf_t is
named sp, and the user stack is restored from it when the exception
erets back to EL0.
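A rough sketch of the idea (the actual _esf_t has more fields and a
different layout):

    #include <stdint.h>

    /* sketch: exception stack frame with the added slot for SP_EL0 */
    typedef struct __esf {
            /* ... existing general purpose register slots ... */
            uint64_t sp; /* SP_EL0 saved on entry from EL0, restored on eret */
    } _esf_t;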
Signed-off-by: Jaxson Han <jaxson.han@arm.com>
Add three per-cpu variables for quick access.
The safe_exception_stack stores the top of the safe exception stack.
The current_stack_limit stores the current thread's priv stack limit.
The corrupted_sp stores the priv sp or irq sp in the stack overflow
case, or 0 in the normal case.
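As a rough sketch (assuming these fields live in the per-CPU arch
structure; the actual definition may differ):

    #include <stdint.h>

    struct _cpu_arch {
            uint64_t safe_exception_stack; /* top of the safe exception stack   */
            uint64_t current_stack_limit;  /* current thread's priv stack limit */
            uint64_t corrupted_sp;         /* priv/irq sp on overflow, else 0   */
    };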
Signed-off-by: Jaxson Han <jaxson.han@arm.com>
Introduce two configs in preparation for enabling the safe exception
stack for kernel space. This is the preparation for enabling the
hardware stack guard. Also define the safe exception stack for the
kernel exception stack check.
Signed-off-by: Jaxson Han <jaxson.han@arm.com>
If so, this is most certainly a bug. arch_mem_unmap() should be
used before mapping the same area again.
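A hedged usage sketch from kernel-internal code (the helper is
hypothetical, and the header/flag choice is an assumption):

    #include <zephyr/kernel.h>
    #include <kernel_arch_interface.h> /* arch_mem_map()/arch_mem_unmap() */

    /* replace an existing mapping explicitly instead of silently
     * mapping over it
     */
    static void remap_region(void *virt, uintptr_t new_phys, size_t size)
    {
            arch_mem_unmap(virt, size);
            arch_mem_map(virt, new_phys, size, K_MEM_PERM_RW);
    }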
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
First, we have commit 7d27bd0b85 ("arch: arm64: Disable infinite
recursion warning for `discard_table`") that blindly shut up a
compiler warning that actually highlighted a real bug. Revert that and
fix the bug properly. And yes, mea culpa for having been the first to
approve that commit, or even creating the bug in the first place.
Then let's add proper table usage count handling to discard_table() so
it works properly and doesn't leak table pages.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
For the RISCV arch, enable the FLASH_SIZE and FLASH_BASE_ADDRESS
configs. To avoid duplicated work, remove the flash configs from the
RISCV SoCs.
Signed-off-by: Jonas Otto <jonas@jonasotto.com>
The image header is compatible with the zImage(32) protocol.
Offset  Value          Description
0x24    0x016F2818     Magic number to identify ARM Linux zImage
0x28    start address  The address the zImage starts at
0x2C    end address    The address the zImage ends at
As Zephyr can be built with a fixed load address, Xen/Uboot can read
the image header and decide where to copy the Zephyr image.
Also, it is to be noted that for AArch32 A/R, the vector table should
be aligned to a 0x20 address. Refer to ARM DDI 0487I.a ID081822,
G8-9815, G8.2.168, VBAR, Vector Base Address Register:
Bits [4:0] = RES0.
For AArch32 M (refer to DDI0553B.v ID16122022, D1.2.269, VTOR, Vector
Table Offset Register), Bits [6:0] = RES0.
As the zImage header occupies 0x30 bytes, it is necessary to align the
vector table base address to 0x80 (which satisfies both VBAR and VTOR
requirements).
Also, it is to be noted that not all AArch32 M-class processors have a
VTOR, thus the ARM_ZIMAGE_HEADER header depends on
CPU_AARCH32_CORTEX_R || CPU_AARCH32_CORTEX_A || CPU_CORTEX_M_HAS_VTOR.
The reason is that processors which do not have a VBAR or VTOR need
the exception vector table at a fixed address at the beginning of ROM
(refer to the comment in arch/arm/core/aarch32/cortex_m/CMakeLists.txt);
they cannot support any headers.
Also, the first instruction in the zImage header branches to the
kernel start address. This supports booting in situations where the
zImage header need not be parsed.
In the case of Armv8M, the first two entries in the reset vector
should be "Initial value for the main stack pointer on reset" and
"Start address for the reset handler" (refer to Armv8M DDI0553B.v
ID16122022, B3.30, Vector tables).
In the case of Armv7M (ARM DDI 0403E. ID021621, B1.5.3 The vector
table), the first entry is "SP_main. This is the reset value of the
Main stack pointer.".
Thus when v7M or v8M starts from reset, it expects to see these values
at the default reset vector location.
See the following text from Armv7M (ARM DDI 0403E. ID021621, B1-526)
"On powerup or reset, the processor uses the entry at offset 0 as the
initial value for SP_main..."
Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Add a missing include to prevent `'EINVAL' undeclared` when
using `CONFIG_NULL_POINTER_EXCEPTION_DETECTION_DWT=y`.
Signed-off-by: George Ruinelli <caco3@ruinelli.ch>
FP16 isn't supported on Cortex-M, so limit the Kconfig feature to
Cortex-A or Cortex-R.
Signed-off-by: Kumar Gala <kumar.gala@intel.com>
CONFIG_ROM_START_OFFSET is supposed to be added to the current address
when linking, instead of having the current address set to it. So fix
that.
Not sure why it worked up to this point, but llvm/clang/lld complained
that it could not move the location counter backward.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Introduce an optional hook to be called when the CPU is made idle.
If needed, this hook can be used to prevent the CPU from actually
entering sleep by skipping the WFE/WFI instruction.
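A minimal sketch of how such a hook could look, assuming it is named
z_arm_on_enter_cpu_idle() and that returning false makes the idle code
skip the WFE/WFI instruction:

    #include <stdbool.h>

    /* hypothetical wake-up predicate provided by SoC/board code */
    extern bool soc_wakeup_pending(void);

    bool z_arm_on_enter_cpu_idle(void)
    {
            if (soc_wakeup_pending()) {
                    return false; /* skip WFE/WFI, do not enter sleep */
            }
            return true; /* proceed with entering sleep */
    }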
Signed-off-by: Andrzej Głąbek <andrzej.glabek@nordicsemi.no>
Looks like some implementors decided not to implement the full set of
PMP range matching modes. Let's rearrange the code so that any of those
modes can be disabled.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Let's honor CONFIG_MPU_REQUIRES_POWER_OF_TWO_ALIGNMENT even for kernel
stacks. This saves one global PMP slot when creating the guard area for
the IRQ stack, and some hw implementations might require that anyway.
With this change, arch_mem_domain_max_partitions_get() becomes much
more reliable and tests/kernel/mem_protect is more likely to pass even
with the stack guard enabled.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Additional privileged stack space is used by peripheral emulators when
userspace is enabled. This is largely due to additional function calls and
data structures allocated on the stack. This can potentially lead to stack
smashing if the privileged stack size isn't high enough, causing an
exception.
Increase the privileged stack space when userspace and peripheral emulation
are enabled.
Signed-off-by: Aaron Massey <aaronmassey@google.com>
When CONFIG_SOC_ISR_SW_UNSTACKING is defined, it's up to the custom
SoC code to remove the ESF, because the software-managed part of the
ESF depends on the hardware. Fix this in the ISR code.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Some implementations may not capture the faulting instruction in mtval
and set it to zero when an illegal instruction fault is raised. This
is notably the case with QEMU version 7.0.0 when a CSR instruction is
involved.
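A hedged sketch of the workaround (helper and parameter names are
illustrative): when mtval is zero, fetch the opcode at mepc instead.

    #include <stdint.h>

    /* recover the faulting opcode when mtval was not populated */
    static uint32_t fault_insn(unsigned long mepc, unsigned long mtval)
    {
            uint32_t insn = (uint32_t)mtval;

            if (insn == 0) {
                    insn = *(const uint16_t *)mepc;
                    if ((insn & 0x3) == 0x3) {
                            /* not a compressed insn: fetch the upper half */
                            insn |= (uint32_t)*(const uint16_t *)(mepc + 2) << 16;
                    }
            }
            return insn;
    }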
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
The FRCSR, FSCSR, FRRM, FSRM, FSRMI, FRFLAGS, FSFLAGS and FSFLAGSI
instructions are in fact CSR instructions targeting the fcsr, frm and
fflags registers. They should be caught as FPU instructions as well.
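A rough sketch of the extra check (names are illustrative; the CSR
numbers fflags=0x001, frm=0x002, fcsr=0x003 come from the RISC-V
spec):

    #include <stdbool.h>
    #include <stdint.h>

    /* a SYSTEM-opcode instruction accessing fflags/frm/fcsr counts as
     * an FPU instruction for trap handling purposes
     */
    static bool is_fp_csr_insn(uint32_t insn)
    {
            uint32_t csr = insn >> 20;
            uint32_t funct3 = (insn >> 12) & 0x7;

            return (insn & 0x7f) == 0x73 && /* SYSTEM opcode */
                   funct3 != 0 &&           /* a CSR* instruction */
                   csr >= 0x001 && csr <= 0x003;
    }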
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
- IRQ state for the interrupted context corresponds to the PIE bit not
the IE bit.
- Restoring the saved FPU state should clear the entire field before
or'ing wanted bits in.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
For RISC-V, the reg property of a cpu node in the devicetree describes
the low level unique ID of each hart. Using devicetree macros, a list
of all cpus with status "okay" can be generated.
Using devicetree overlays, a hart or multiple harts can be marked as
"disabled", thus excluding them from the list. This allows platforms
that have non-zero indexed SMP capable harts to be functionally mapped
to Zephyr's sequential CPU numbering scheme.
On kernel init, if the application has MP_MAX_NUM_CPUS greater than 1,
generate the list of cpu nodes from the device tree with status "okay"
and map the unique hartids to Zephyr CPUs, as sketched below.
While we are at it, as the hartid is the value that gets passed to
z_riscv_secondary_cpu_init, use that as the variable name instead of
cpu_num.
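A rough sketch of the list generation (assuming the enabled children
of the /cpus node are exactly the cpu nodes, and that a node's reg
address is the hartid):

    #include <zephyr/devicetree.h>

    /* hartids of all enabled cpu nodes, in devicetree order; the index
     * into this array is the Zephyr CPU number
     */
    static const unsigned long cpu_node_list[] = {
            DT_FOREACH_CHILD_STATUS_OKAY_SEP(DT_PATH(cpus), DT_REG_ADDR, (,))
    };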
Signed-off-by: Conor Paxton <conor.paxton@microchip.com>
RISC-V multi-hart systems that employ a heterogeneous core complex are
not guaranteed to have the SMP capable harts starting with a unique ID
of zero, matching Zephyr's sequential zero indexed CPU numbering
scheme.
Add an option, RV_BOOT_HART, to choose the hart to boot from.
On reset, check whether the current hart equals RV_BOOT_HART: if so,
boot the first core; else, loop in the secondary core boot path and
wait to be brought up.
For multi-hart systems that are not running a Zephyr MP or SMP
application, park the non-Zephyr harts in a WFI loop.
Signed-off-by: Conor Paxton <conor.paxton@microchip.com>
Add an option to generate simplified error codes instead of more
specific, architecture-specific error codes. Enable this by default in
tests to make exception tests more generic across hardware.
Fixes #54053.
Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>
Disable the python argparse library's automatic abbreviation of
command line arguments. This prevents issues whereby a new argument is
added and code that wrongly used an abbreviated form of an existing
argument, now matching the newly added one, would silently change
script behaviour.
Signed-off-by: Jamie McCrae <jamie.mccrae@nordicsemi.no>
Commit 4f9b547ebd ("riscv: smp: prepare for more than one IPI type")
didn't clear pending IPI flags.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
We can leverage the FPU dirty state as an indicator for preemptively
reloading the FPU content when a thread that did use the FPU before
being scheduled out is scheduled back in. This avoids the FPU access
trap overhead when switching between multiple threads with heavy FPU
usage.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
FPU context switching is always performed on demand through the FPU
access exception handler. Actual task switching only grants or denies
FPU access depending on the current FPU owner.
Because RISC-V doesn't have a dedicated FPU access exception, we must
catch the Illegal Instruction exception and look for actual FP opcodes.
There is no longer a need to allocate FPU storage on the stack for
every exception, making the esf smaller and stack overflows less
likely.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Instead of saving/restoring FPU content on every exception and task
switch, this replaces FPU sharing support with a "lazy" (on-demand)
context switching algorithm similar to the one used on ARM64.
Every thread starts with FPU access disabled. On the first access the
FPU trap is invoked to:
- flush the FPU content to the previous thread's memory storage;
- restore the current thread's FPU content from memory.
When a thread loads its data in the FPU, it becomes the FPU owner.
FPU content is preserved across task switching; however, FPU access is
allowed if the new thread is the FPU owner and denied otherwise.
A thread may claim FPU ownership only through the FPU trap. This way,
threads that don't use the FPU won't force an FPU context switch.
If only one running thread uses the FPU, there will be no FPU context
switching to do at all.
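A hedged sketch of the switch performed in the FPU trap (all names are
illustrative, not the actual implementation):

    #include <stddef.h>

    struct fpu_ctx { double f[32]; unsigned int fcsr; }; /* simplified */

    static struct fpu_ctx *fpu_owner_ctx; /* context of the current owner */

    extern void fpu_save(struct fpu_ctx *ctx);          /* flush to memory */
    extern void fpu_restore(const struct fpu_ctx *ctx); /* reload from memory */
    extern void fpu_access_enable(void);                /* grant FPU access */

    /* called from the illegal-instruction trap when an FP opcode is seen */
    static void fpu_claim(struct fpu_ctx *current_ctx)
    {
            if (fpu_owner_ctx != NULL && fpu_owner_ctx != current_ctx) {
                    fpu_save(fpu_owner_ctx);  /* flush the previous owner */
            }
            fpu_restore(current_ctx);         /* load this thread's content */
            fpu_owner_ctx = current_ctx;
            fpu_access_enable();              /* resume the trapped insn */
    }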
It is possible to do FP accesses in ISRs and syscalls. This is not the
norm though, so the same principle is applied here, although exception
contexts may not own the FPU. When they access the FPU, the FPU content
is flushed and the exception context is granted FPU access for the
duration of the exception. Nested IRQs are disallowed in that case to
dispense with the need to save and restore the exception's FPU context
data.
This is the core implementation only to ease reviewing. It is not yet
hooked into the build.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Right now this is hardcoded to z_sched_ipi(). Make it so that other IPI
services can be added in the future.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
If running under the Xtensa simulator, it is good to tell the
simulator to stop execution once we reach the double exception
handler, as the current double exception handler is simply an endless
loop. If tracing is turned on in the simulator, the output file would
contain an infinite iteration of this endless loop, and the simulator
would need to be stopped manually before the file size goes out of
control. So tell the simulator to stop once we reach this point
instead of looping endlessly.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Given the Zephyr CPU number is no longer tied to the hartid, we must
consider the actual hartid when sending an IPI to a given CPU. Since
those hartids can be anything, let's just save them in the cpu structure
as each CPU is brought online.
While at it, throw in some `get_hart_msip()` cleanups.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Currently it is assumed that Zephyr CPU numbers match their hartid
value one for one. This assumption was relied upon to efficiently
retrieve the current CPU's `struct _cpu` pointer.
People are starting to have systems with a mix of different usages for
each CPU, and that assumption may no longer hold.
Let's completely decouple the hartid from the Zephyr CPU number by
stuffing each CPU's `struct _cpu` pointer in their respective scratch
register instead. `arch_curr_cpu()` becomes more efficient as well.
Since the scratch register was previously used to store userspace's
exception stack pointer, that is now moved into `struct _cpu_arch`,
which implied some minor user space entry code cleanup and
rationalization.
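A minimal sketch of the resulting lookup (the real arch_curr_cpu()
implementation may differ in details):

    struct _cpu; /* Zephyr per-CPU structure (opaque here) */

    /* each hart's scratch CSR holds its own struct _cpu pointer */
    static inline struct _cpu *riscv_curr_cpu(void)
    {
            unsigned long v;

            __asm__ volatile ("csrr %0, mscratch" : "=r" (v));
            return (struct _cpu *)v;
    }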
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Until now, the Cortex A/R AArch32 implementation for context switching
expected interrupts to be disabled. This is true if a context switch
happens in thread context.
But if a context switch happens as the last action during interrupt
context, this assumption is not true because interrupts are still
enabled (to allow nested interrupts).
Disable interrupts at the last interrupt action to ensure interrupts
are always disabled before context switching is processed.
Signed-off-by: Dat Nguyen Duy <dat.nguyenduy@nxp.com>
On platforms where the linker is capable of doing global
optimizations, like relaxing address modes and synthesizing new
instructions, Zephyr has to disable them when enabling USERSPACE,
since the build expects that addresses don't change after the
first-stage build.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>